_id (string, length 40) · text (string, 0 to 10k characters)
d1ee87290fa827f1217b8fa2bccb3485da1a300e
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
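As a concrete illustration of the procedure described above, here is a minimal sketch of bagged regression trees in Python. The base learner, replicate count, and use of scikit-learn are illustrative choices for the sketch, not part of the original method description.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Bagging sketch for regression: bootstrap replicates of the learning
# set, one predictor per replicate, aggregation by averaging.
def bagged_predict(X_train, y_train, X_test, n_replicates=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_replicates):
        idx = rng.integers(0, n, size=n)              # bootstrap replicate
        tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        preds.append(tree.predict(X_test))
    return np.mean(preds, axis=0)                      # average the versions
```

For classification, the final line would instead take a plurality vote over the predicted class labels.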
649197627a94fc003384fb743cfd78cdf12b3306
0b440695c822a8e35184fb2f60dcdaa8a6de84ae
The recent success of emerging RGB-D cameras such as the Kinect sensor points to the broad prospects of 3-D data-based computer applications. However, due to the lack of a standard testing database, it is difficult to evaluate how face recognition technology can benefit from this up-to-date imaging sensor. In order to establish the connection between the Kinect and face recognition research, in this paper we present the first publicly available face database (i.e., KinectFaceDB) based on the Kinect sensor. The database consists of different data modalities (well-aligned and processed 2-D, 2.5-D, 3-D, and video-based face data) and multiple facial variations. We conducted benchmark evaluations on the proposed database using standard face recognition methods, and demonstrated the gain in performance when integrating the depth data with the RGB data via score-level fusion. We also compared the 3-D images of the Kinect (from the KinectFaceDB) with traditional high-quality 3-D scans (from the FRGC database) in the context of face biometrics, which reveals the imperative need for the proposed database in face recognition research.
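The score-level fusion mentioned above can be illustrated with a generic weighted-sum rule. The normalization scheme and the 0.6/0.4 weighting below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def minmax_norm(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(rgb_scores, depth_scores, w_rgb=0.6):
    # Weighted-sum score-level fusion of two matchers' scores after
    # min-max normalization; the weighting is illustrative only.
    return w_rgb * minmax_norm(rgb_scores) + (1 - w_rgb) * minmax_norm(depth_scores)
```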
a85275f12472ecfbf4f4f00a61514b0773923b86
Advances in wireless technology and supporting infrastructure provide unprecedented opportunity for ubiquitous real-time healthcare and fitness monitoring without constraining the activities of the user. Wirelessly connected miniaturized sensors and actuators placed in, on, and around the body form a body area network for continuous, automated, and unobtrusive monitoring of physiological signs to support medical, lifestyle and entertainment applications. BAN technology is in the early stage of development, and several research challenges have to be overcome for it to be widely accepted. In this article we study the core set of application, functional, and technical requirements of the BAN. We also discuss fundamental research challenges such as scalability (in terms of data rate, power consumption, and duty cycle), antenna design, interference mitigation, coexistence, QoS, reliability, security, privacy, and energy efficiency. Several candidate technologies poised to address the emerging BAN market are evaluated, and their merits and demerits are highlighted. A brief overview of standardization activities relevant to BANs is also presented.
f4abebef4e39791f358618294cd8d040d7024399
This report describes an analysis of the Fitbit Flex ecosystem. Our objectives are to describe (1) the data Fitbit collects from its users, (2) the data Fitbit provides to its users, and (3) methods of recovering data not made available to device owners. Our analysis covers four distinct attack vectors. First, we analyze the security and privacy properties of the Fitbit device itself. Next, we observe the Bluetooth traffic sent between the Fitbit device and a smartphone or personal computer during synchronization. Third, we analyze the security of the Fitbit Android app. Finally, we study the security properties of the network traffic between the Fitbit smartphone or computer application and the Fitbit web service. We provide evidence that Fitbit unnecessarily obtains information about nearby Flex devices under certain circumstances. We further show that Fitbit does not provide device owners with all of the data collected. In fact, we find evidence of per-minute activity data that is sent to the Fitbit web service but not provided to the owner. We also discovered that MAC addresses on Fitbit devices are never changed, enabling user-correlation attacks. BTLE credentials are also exposed on the network during device pairing over TLS, where they might be intercepted by MITM attacks. Finally, we demonstrate that actual user activity data is authenticated and not provided in plaintext on an end-to-end basis from the device to the Fitbit web service.
3007a8f5416404432166ff3f0158356624d282a1
Graph abstraction is essential for many applications from finding a shortest path to executing complex machine learning (ML) algorithms like collaborative filtering. Graph construction from raw data for various applications is becoming challenging, due to exponential growth in data, as well as the need for large scale graph processing. Since graph construction is a data-parallel problem, MapReduce is well-suited for this task. We developed GraphBuilder, a scalable framework for graph Extract-Transform-Load (ETL), to offload many of the complexities of graph construction, including graph formation, tabulation, transformation, partitioning, output formatting, and serialization. GraphBuilder is written in Java, for ease of programming, and it scales using the MapReduce model. In this paper, we describe the motivation for GraphBuilder, its architecture, MapReduce algorithms, and a performance evaluation of the framework. Since large graphs must be partitioned over a cluster for storage and processing, and since partitioning methods have significant performance impacts, we develop several graph partitioning methods and evaluate their performance. We also open source the framework at https://01.org/graphbuilder/.
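GraphBuilder itself is a Java framework running on Hadoop; the toy pipeline below only mirrors the map/reduce data flow of graph formation and tabulation, with a made-up tab-separated input format standing in for raw records.

```python
from collections import defaultdict

# Toy map/reduce pipeline for graph ETL: map raw records to edges,
# reduce by source vertex to build an adjacency list.
def map_phase(records):
    for rec in records:                 # e.g. "src_id<TAB>dst_id"
        src, dst = rec.split("\t")
        yield src, dst                  # emit (key=src, value=dst)

def reduce_phase(pairs):
    adj = defaultdict(list)
    for src, dst in pairs:
        adj[src].append(dst)            # tabulate edges per vertex
    return adj

adj = reduce_phase(map_phase(["a\tb", "a\tc", "b\tc"]))
```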
2e526c2fac79c080b818b304485ddf84d09cf08b
Temporal data mining aims at finding patterns in historical data. Our work proposes an approach to extract temporal patterns from data to predict the occurrence of target events, such as computer attacks on host networks, or fraudulent transactions in financial institutions. Our problem formulation exhibits two major challenges: 1) we assume events are characterized by categorical features and display uneven inter-arrival times; such an assumption falls outside the scope of classical time-series analysis; 2) we assume target events are highly infrequent, so predictive techniques must deal with the class-imbalance problem. We propose an efficient algorithm that tackles these challenges by transforming the event prediction problem into a search for all frequent eventsets preceding target events. The class-imbalance problem is overcome by searching for patterns on the minority class exclusively; the discriminative power of patterns is then validated against the other classes. Patterns are then combined into a rule-based model for prediction. Our experimental analysis indicates the types of event sequences in which target events can be accurately predicted.
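A minimal sketch of the core search: enumerate eventsets within windows that precede target events (the minority class) and keep the frequent ones. Window construction, validation against the other classes, and rule building are omitted, and the support threshold is illustrative.

```python
from collections import Counter
from itertools import combinations

# Count eventsets occurring in windows that precede target events;
# each window is a set of categorical event types.
def frequent_preceding_eventsets(windows, min_support=0.3, max_size=2):
    counts = Counter()
    for window in windows:
        for k in range(1, max_size + 1):
            for combo in combinations(sorted(window), k):
                counts[combo] += 1
    n = len(windows)
    return {es: c / n for es, c in counts.items() if c / n >= min_support}
```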
0a54d2f49bda694071bbf43d8e653f5adf85be19
Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity compared to centralized or host-based systems. Distributed systems may need to deal with heterogeneous platforms, with multiple databases and (possibly) different schemas, with the design and implementation of scalable and effective protocols for communicating among the data sites, and with the selective and efficient use of the information that is gathered from other peer data sites. Other important problems, intrinsic to data mining systems, that must not be ignored include, first, the ability to take advantage of newly acquired information that was not previously available when models were computed and to combine it with existing models, and second, the flexibility to incorporate new machine learning methods and data mining technologies. We explore these issues within the context of JAM and evaluate various proposed solutions through extensive empirical studies.
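A stacking-style sketch of the meta-learning idea, with data partitions standing in for distributed sites. The choice of base and meta learners is illustrative, not JAM's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Base classifiers trained on separate partitions (stand-ins for sites);
# a meta-classifier is trained on their predictions over a held-out set.
def train_meta(X_parts, y_parts, X_val, y_val):
    bases = [GaussianNB().fit(Xp, yp) for Xp, yp in zip(X_parts, y_parts)]
    meta_features = np.column_stack([b.predict(X_val) for b in bases])
    meta = LogisticRegression().fit(meta_features, y_val)
    return bases, meta

def predict_meta(bases, meta, X):
    return meta.predict(np.column_stack([b.predict(X) for b in bases]))
```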
b00672fc5ff99434bf5347418a2d2762a3bb2639
Embedded devices have become ubiquitous, and they are used in a range of privacy-sensitive and security-critical applications. Most of these devices run proprietary software, and little documentation is available about the software’s inner workings. In some cases, the cost of the hardware and protection mechanisms might make access to the devices themselves infeasible. Analyzing the software that is present in such environments is challenging, but necessary, if the risks associated with software bugs and vulnerabilities must be avoided. As a matter of fact, recent studies revealed the presence of backdoors in a number of embedded devices available on the market. In this paper, we present Firmalice, a binary analysis framework to support the analysis of firmware running on embedded devices. Firmalice builds on top of a symbolic execution engine, and techniques, such as program slicing, to increase its scalability. Furthermore, Firmalice utilizes a novel model of authentication bypass flaws, based on the attacker’s ability to determine the required inputs to perform privileged operations. We evaluated Firmalice on the firmware of three commercially-available devices, and were able to detect authentication bypass backdoors in two of them. Additionally, Firmalice was able to determine that the backdoor in the third firmware sample was not exploitable by an attacker without knowledge of a set of unprivileged credentials.
6949a33423051ce6fa5b08fb7d5f06ac9dcc721b
A case study on the theoretical and practical value of using process mining for the detection of fraudulent behavior in the procurement process. This thesis presents the results of a six-month research period on process mining and fraud detection. The thesis aimed to answer the research question of how process mining can be utilized in fraud detection and what the benefits of using process mining for fraud detection are. Based on a literature study, it provides a discussion of the theory and application of process mining and its various aspects and techniques. Using both a literature study and an interview with a domain expert, the concepts of fraud and fraud detection are discussed. These results are combined with an analysis of existing case studies on the application of process mining and fraud detection to construct an initial setup of two case studies, in which process mining is applied to detect possible fraudulent behavior in the procurement process. Based on the experiences and results of these case studies, the 1+5+1 methodology is presented as a first step towards operationalized principles with advice on how process mining techniques can be used in practice when trying to detect fraud. This thesis presents three conclusions: (1) process mining is a valuable addition to fraud detection; (2) using the 1+5+1 concept it was possible to detect indicators of possibly fraudulent behavior; and (3) the practical use of process mining for fraud detection is diminished by the poor performance of the current tools. The techniques and tools that do not suffer from performance issues are an addition, rather than a replacement, to regular data analysis techniques, providing either new, quicker, or more easily obtainable insights into the process and possible fraudulent behavior.
8aef832372c6e3e83f10532f94f18bd26324d4fd
Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F1 of 53.3%, a substantial improvement over the state-of-the-art.
16edc3faf625fd437aaca1527e8821d979354fba
Well-being is a complex construct that concerns optimal experience and functioning. Current research on well-being has been derived from two general perspectives: the hedonic approach, which focuses on happiness and defines well-being in terms of pleasure attainment and pain avoidance; and the eudaimonic approach, which focuses on meaning and self-realization and defines well-being in terms of the degree to which a person is fully functioning. These two views have given rise to different research foci and a body of knowledge that is in some areas divergent and in others complementary. New methodological developments concerning multilevel modeling and construct comparisons are also allowing researchers to formulate new questions for the field. This review considers research from both perspectives concerning the nature of well-being, its antecedents, and its stability across time and culture.
ac8c2e1fa35e797824958ced835257cd49e1be9c
This paper reviews and assesses the emerging research literature on information technology and organizational learning. After discussing issues of meaning and measurement, we identify and assess two main streams of research: studies that apply organizational learning concepts to the process of implementing and using information technology in organizations; and studies concerned with the design of information technology applications to support organizational learning. From the former stream of research, we conclude that experience plays an important, yet indeterminate role in implementation success; learning is accomplished through both formal training and participation in practice; organizational knowledge barriers may be overcome by learning from other organizations; and that learning new technologies is a dynamic process characterized by relatively narrow windows of opportunity. From the latter stream, we conclude that conceptual designs for organizational memory information systems are a valuable contribution to artifact development; learning is enhanced through systems that support communication and discourse; and that information technologies have the potential to both enable and disable organizational learning. Currently, these two streams flow independently of each other, despite their close conceptual and practical links. We advise that future research on information technology and organizational learning proceeds in a more integrated fashion, recognizes the situated nature of organizational learning, focuses on distributed organizational memory, demonstrates the effectiveness of artifacts in practice, and looks for relevant research findings in related fields.
654d129eafc136bf5fccbc54e6c8078e87989ea8
In this work a multimode-beamforming 77-GHz frequency-modulated continuous-wave radar system is presented. Four transceiver chips with integrated inphase/quadrature modulators in the transmit path are used in order to simultaneously realize a short-range frequency-division multiple-access (FDMA) multiple-input multiple-output (MIMO) and a long-range transmit phased-array (PA) radar system with the same antennas. It combines the high angular resolution of FDMA MIMO radars and the high-gain and steerable beam of PA transmit antennas. Several measurements were carried out to show the potential benefits of using this concept for a linear antenna array with four antennas and methods of digital beamforming in the receive path.
60611349d1b6d64488a5a88a9193e62d9db27b71
This report reviews existing fatigue detection and prediction technologies. Data regarding the different technologies available were collected from a wide variety of worldwide sources. The first half of this report summarises the current state of research and development of the technologies and summarises the status of the technologies with respect to the key issues of sensitivity, reliability, validity and acceptability. The second half evaluates the role of the technologies in transportation, and comments on the place of the technologies vis-à-vis other enforcement and regulatory frameworks, especially in Australia and New Zealand. The report authors conclude that the hardware technologies should never be used as the company fatigue management system. Hardware technologies only have the potential to be a last-ditch safety device. Nevertheless, the output of hardware technologies could usefully feed into company fatigue management systems to provide real-time risk assessment. However, hardware technology output should never be the only input into a management system. Other inputs should at least come from validated software technologies, mutual assessment of fitness for duty, and other risk assessments of workload, schedules and rosters. Purpose: For information: to provide an understanding of the place of fatigue detection and prediction technologies in the management of fatigue in drivers of heavy vehicles.
d26c517baa9d6acbb826611400019297df2476a9
0ee1916a0cb2dc7d3add086b5f1092c3d4beb38a
The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset have become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three-year history of the challenge, and proposes directions for future improvement and extension.
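For reference, a sketch of the 11-point interpolated average precision used in early editions of the challenge, given detections already ranked by confidence and matched to ground truth (1 = true positive, 0 = false positive). The full protocol (IoU matching, handling of 'difficult' annotations) is omitted.

```python
import numpy as np

def average_precision(labels_ranked, n_positives):
    tp = np.cumsum(labels_ranked)
    precision = tp / np.arange(1, len(labels_ranked) + 1)
    recall = tp / n_positives
    ap = 0.0
    for t in np.linspace(0, 1, 11):           # 11-point interpolation
        p = precision[recall >= t]
        ap += (p.max() if p.size else 0.0) / 11
    return ap
```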
981fef7155742608b8b6673f4a9566158b76cd67
a6eb10b1d30b4547b04870a82ec0c65baf2198f8
40e06608324781f6de425617a870a103d4233d5c
Purpose – The purpose of this research is to understand the mechanisms of knowledge management (KM) for innovation and provide an approach for enterprises to leverage KM activities into continuous innovation. Design/methodology/approach – By reviewing the literature from multidisciplinary fields, the concepts of knowledge, KM and innovation are investigated. The physical, human and technological perspectives of KM are distinguished, with the identification of two core activities for innovation: knowledge creation and knowledge usage. Then an essential requirement for continuous innovation – an internalization phase – is defined. The systems thinking and human-centred perspectives are adopted to provide a comprehensive understanding of the mechanisms of KM for innovation. Findings – A networking process of continuous innovation based on KM is proposed by incorporating the phase of internalization. According to the three perspectives of KM, three sources of organizational knowledge assets in innovation are identified. Then, based on the two core activities of innovation, a meta-model and a macro process of KM are proposed to model the mechanisms of KM for continuous innovation. In order to operationalize the KM mechanisms, a hierarchical model is constructed by integrating the three sources of knowledge assets, the meta-model and the macro process into the process of continuous innovation. This model decomposes the complex relationships between knowledge and innovation into four layers. Practical implications – According to the lessons learned about KM practices in previous research, the three perspectives of KM should collaborate with each other for successful implementation of KM projects for innovation, and the hierarchical model provides a suitable architecture for implementing systems of KM for innovation. Originality/value – The meta-model and macro process of KM explain how the next generation of KM can support value creation and continuous innovation from the systems thinking perspective. The hierarchical model illustrates the complicated knowledge dynamics in the process of continuous innovation.
1dba1fa6dd287fde87823218d4f03559dde4e15b
This paper presents strategies and lessons learned from the use of natural language annotations to facilitate question answering in the START information access system.
77fbbb9ff612c48dad8313087b0e6ed03c31812a
Liquid crystal polymer (LCP) is a material that has gained attention as a potential high-performance microwave substrate and packaging material. This investigation uses several methods to determine the electrical properties of LCP for millimeter-wave frequencies. Microstrip ring resonators and cavity resonators are measured in order to characterize the dielectric constant (εr) and loss tangent (tan δ) of LCP above 30 GHz. The measured dielectric constant is shown to be steady near 3.16, and the loss tangent stays below 0.0049. In addition, various transmission lines are fabricated on different LCP substrate thicknesses and the loss characteristics are given in decibels per centimeter from 2 to 110 GHz. Peak transmission-line losses at 110 GHz vary between 0.88-2.55 dB/cm, depending on the line type and geometry. These results show, for the first time, that LCP has excellent dielectric properties for applications extending through millimeter-wave frequencies.
cb84ef73db0a259b07289590f0dfcb9b8b9bbe79
This paper describes a hybrid radio frequency (RF) and piezoelectric thin-film polyvinylidene fluoride (PVDF) vibration energy harvester for wearable devices. By exploiting the impedance characteristics of parasitic capacitances and discrete inductors, the proposed harvester not only scavenges 15 Hz vibration energy but also works as a 915 MHz flexible silver-ink RF dipole antenna. In addition, an interface circuit including a 6-stage Dickson RF-to-DC converter and a diode bridge rectifier to convert the RF and vibration outputs of the hybrid harvester into DC signals to power resistive loads is evaluated. A maximum DC output power of 20.9 μW, when using the RF-to-DC converter and −8 dBm input RF power, is achieved at 36% of the open-circuit output voltage, while the DC power harvested from 3 g vibration excitation reaches a maximum of 2.8 μW at 51% of the open-circuit voltage. Experimental results show that the tested hybrid harvesting system simultaneously generates 7.3 μW DC power, when the distance from the harvester to a 3 W EIRP 915 MHz transmitter is 5.5 m, and 1.8 μW DC power from a 1.8 g peak vibration acceleration.
d8e8bdd687dd588b71d92ff8f6018a1084f85437
Analogous to the way humans use the Internet, devices will be the main users in the Internet of Things (IoT) ecosystem. Therefore, device-to-device (D2D) communication is expected to be an intrinsic part of the IoT. Devices will communicate with each other autonomously without any centralized control and collaborate to gather, share, and forward information in a multihop manner. The ability to gather relevant information in real time is key to leveraging the value of the IoT as such information will be transformed into intelligence, which will facilitate the creation of an intelligent environment. Ultimately, the quality of the information gathered depends on how smart the devices are. In addition, these communicating devices will operate with different networking standards, may experience intermittent connectivity with each other, and many of them will be resource constrained. These characteristics open up several networking challenges that traditional routing protocols cannot solve. Consequently, devices will require intelligent routing protocols in order to achieve intelligent D2D communication. We present an overview of how intelligent D2D communication can be achieved in the IoT ecosystem. In particular, we focus on how state-of-the-art routing algorithms can achieve intelligent D2D communication in the IoT.
5e6035535d6d258a29598faf409b57a71ec28f21
766c251bd7686dd707acd500e80d7184929035c6
Traffic light detection (TLD) is a vital part of both intelligent vehicles and driving assistance systems (DAS). Most TLDs are evaluated on small and private datasets, making it hard to determine the exact performance of a given method. In this paper we apply the state-of-the-art, real-time object detection system You Only Look Once (YOLO) on the public LISA Traffic Light dataset available through the VIVA challenge, which contains a high number of annotated traffic lights captured in varying light and weather conditions. The YOLO object detector achieves an impressive AUC of 90.49% for daySequence1, which is an improvement of 50.32% compared to the latest ACF entry in the VIVA challenge. Using the exact same training configuration as the ACF detector, the YOLO detector reaches an AUC of 58.3%, which is an increase of 18.13%.
136b9952f29632ab3fa2bbf43fed277204e13cb5
Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.
eb06182a2817d06e82612a0c32a6c843f01c6a03
This paper proposes a neural generative model, namely Table2Seq, to generate a natural language sentence based on a table. Specifically, the model maps a table to continuous vectors and then generates a natural language sentence by leveraging the semantics of a table. Since rare words, e.g., entities and values, usually appear in a table, we develop a flexible copying mechanism that selectively replicates contents from the table to the output sequence. We conduct extensive experiments to demonstrate the effectiveness of our Table2Seq model and the utility of the designed copying mechanism. On the WIKIBIO and SIMPLEQUESTIONS datasets, the Table2Seq model improves the state-of-the-art results from 34.70 to 40.26 and from 33.32 to 39.12 in terms of BLEU-4 scores, respectively. Moreover, we construct an open-domain dataset WIKITABLETEXT that includes 13,318 descriptive sentences for 4,962 tables. Our Table2Seq model achieves a BLEU-4 score of 38.23 on WIKITABLETEXT outperforming template-based and language model based approaches. Furthermore, through experiments on 1M table-query pairs from a search engine, our Table2Seq model considering the structured part of a table, i.e., table attributes and table cells, as additional information outperforms a sequence-to-sequence model considering only the sequential part of a table, i.e., table caption.
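The copying mechanism can be sketched as a gate that mixes a vocabulary distribution with an attention-derived distribution over table cells. All tensor names and shapes below are assumptions made for this illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# dec_state: (d,) decoder state; vocab_logits: (V,); cell_keys: (C, d);
# cell_ids: (C,) long tensor of vocabulary ids for the C table cells;
# W_gate: (d,) parameters of the copy-vs-generate gate.
def mixed_distribution(dec_state, vocab_logits, cell_keys, cell_ids, W_gate, vocab_size):
    p_gen = torch.sigmoid(dec_state @ W_gate)          # generate-vs-copy gate
    gen_dist = F.softmax(vocab_logits, dim=-1)
    copy_dist = torch.zeros(vocab_size).scatter_add(   # map cell weights to vocab ids
        0, cell_ids, F.softmax(cell_keys @ dec_state, dim=-1))
    return p_gen * gen_dist + (1 - p_gen) * copy_dist  # final word distribution
```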
ea951c82efe26424e3ce0d167e01f59e5135a2da
The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately instrumented versions of the test are being considered, where inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to the locomotor performance, a dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are not redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (i.e. healthy-Parkinson's disease, fallers-non fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone.
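The feature-reduction step can be sketched as PCA on standardized test parameters followed by keeping the original parameters most correlated with the leading components. The component count and how many parameters to keep per component are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA

def select_features(X, n_components=3, top_k=1):
    # Standardize parameters (assumes no constant columns), extract PCs,
    # then keep the original parameter(s) best correlated with each PC.
    Xs = (X - X.mean(0)) / X.std(0)
    pcs = PCA(n_components=n_components).fit_transform(Xs)
    keep = set()
    for j in range(n_components):
        corr = np.abs([np.corrcoef(Xs[:, i], pcs[:, j])[0, 1]
                       for i in range(X.shape[1])])
        keep.update(int(i) for i in np.argsort(corr)[-top_k:])
    return sorted(keep)
```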
e467278d981ba30ab3b24235d09205e2aaba3d6f
The goal of this study was to develop and test a sequential mediational model explaining the negative relationship of passive leadership to employee well-being. Based on role stress theory, we posit that passive leadership will predict higher levels of role ambiguity, role conflict and role overload. Invoking Conservation of Resources theory, we further hypothesize that these role stressors will indirectly and negatively influence two aspects of employee well-being, namely overall mental health and overall work attitude, through psychological work fatigue. Using a probability sample of 2467 US workers, structural equation modelling supported the model by showing that role stressors and psychological work fatigue partially mediated the negative relationship between passive leadership and both aspects of employee well-being. The hypothesized, sequential indirect relationships explained 47.9% of the overall relationship between passive leadership and mental health and 26.6% of the overall relationship between passive leadership and overall work attitude. Copyright © 2016 John Wiley & Sons, Ltd.
9a86ae8e9b946dc6d957357e0670f262fa1ead9d
f8acaabc99801a89baa5a9eff445fc5922498dd0
Deep domain adaptation methods can reduce the distribution discrepancy by learning domain-invariant embeddings. However, these methods only focus on aligning the whole data distributions, without considering the class-level relations among source and target images. Thus, a target embedding of a bird might be aligned to source embeddings of an airplane. This semantic misalignment can directly degrade the classifier performance on the target dataset. To alleviate this problem, we present a similarity constrained alignment (SCA) method for unsupervised domain adaptation. When aligning the distributions in the embedding space, SCA enforces a similarity-preserving constraint to maintain class-level relations among the source and target images, i.e., if a source image and a target image are of the same class label, their corresponding embeddings are supposed to be aligned nearby, and vice versa. In the absence of target labels, we assign pseudo labels to target images. Given labeled source images and pseudo-labeled target images, the similarity-preserving constraint can be implemented by minimizing the triplet loss. With the joint supervision of the domain alignment loss and the similarity-preserving constraint, we train a network to obtain domain-invariant embeddings with two critical characteristics, intra-class compactness and inter-class separability. Extensive experiments conducted on two datasets demonstrate the effectiveness of SCA.
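The similarity-preserving constraint reduces to a standard triplet loss over source embeddings and pseudo-labeled target embeddings; the margin value and batching are illustrative.

```python
import torch
import torch.nn.functional as F

# Same-(pseudo-)label pairs are pulled together, different-label pairs
# pushed apart by a margin; anchor/positive/negative are embedding batches.
def triplet_alignment_loss(anchor, positive, negative, margin=0.3):
    d_pos = F.pairwise_distance(anchor, positive)   # same class label
    d_neg = F.pairwise_distance(anchor, negative)   # different class label
    return F.relu(d_pos - d_neg + margin).mean()
```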
a3c3c084d4c30cf40e134314a5dcaf66b4019171
21aebb53a45ccac7f6763d9c47477092599f6be1
12e1923fb86ed06c702878bbed51b4ded2b16be1
In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest-neighbor-based classifier can achieve greater than 95% accuracy for multiclass classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.
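The classification stage described above amounts to nearest-neighbor classification of hand-crafted radar features under 10-fold cross-validation; the scikit-learn setup and neighbor count below are illustrative.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# features: (n_samples, n_features) array of e.g. magnitude-difference
# and Doppler-shift features; labels: (n_samples,) gesture classes.
def evaluate(features, labels):
    clf = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(clf, features, labels, cv=10).mean()
```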
25b87d1d17adabe2923da63e0b93fb7d2bac73f7
The constant increase of attacks against networks and their resources (as recently shown by the CodeRed worm) creates a necessity to protect these valuable assets. Firewalls are now a common installation to repel intrusion attempts in the first place. Intrusion detection systems (IDS), which try to detect malicious activities instead of preventing them, offer additional protection when the first defense perimeter has been penetrated. IDSs attempt to pin down attacks by comparing collected data to predefined signatures known to be malicious (signature-based) or to a model of legal behavior (anomaly-based). Anomaly-based systems have the advantage of being able to detect previously unknown attacks, but they suffer from the difficulty of building a solid model of acceptable behavior and from the high number of alarms caused by unusual but authorized activities. We present an approach that utilizes application-specific knowledge of the network services that should be protected. This information helps to extend current, simple network traffic models to form an application model that allows the detection of malicious content hidden in single network packets. We describe the features of our proposed model and present experimental data that underlines the efficiency of our system.
10338babf0119e3dba196aef44fa717a1d9a06df
36e41cdfddd190d7861b91b04a515967fd1541d9
As the number of messages and social relationships embedded in social networking sites (SNS) increases, the amount of social information demanding a reaction from individuals increases as well. We observe that, as a consequence, SNS users feel they are giving too much social support to other SNS users. Drawing on social support theory (SST), we call this negative association with SNS usage 'social overload' and develop a latent variable to measure it. We then identify the theoretical antecedents and consequences of social overload and evaluate the social overload model empirically using interviews with 12 Facebook users and a survey of 571 more. The results show that extent of usage, number of friends, subjective social support norms, and type of relationship (online-only vs offline friends) are factors that directly contribute to social overload, while age has only an indirect effect. The psychological and behavioral consequences of social overload include feelings of SNS exhaustion by users, low levels of user satisfaction, and a high intention to reduce or even stop using SNS. The resulting theoretical implications for SST and SNS acceptance research are discussed, and practical implications for organizations, SNS providers, and SNS users are drawn.
ffcb7146dce1aebf47a910b51a873cfec897d602
Scan and segmented scan are important data-parallel primitives for a wide range of applications. We present fast, work-efficient algorithms for these primitives on graphics processing units (GPUs). We use novel data representations that map well to the GPU architecture. Our algorithms exploit shared memory to improve memory performance. We further improve the performance of our algorithms by eliminating shared-memory bank conflicts and reducing the overheads in prior shared-memory GPU algorithms. Furthermore, our algorithms are designed to work well on general data sets, including segmented arrays with arbitrary segment lengths. We also present optimizations to improve the performance of segmented scans based on the segment lengths. We implemented our algorithms on a PC with an NVIDIA GeForce 8800 GPU and compared our results with prior GPU-based algorithms. Our results indicate up to 10x higher performance over prior algorithms on input sequences with millions of elements.
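The work-efficient scan underlying such GPU implementations is the Blelloch up-sweep/down-sweep algorithm. The NumPy sketch below shows the two phases sequentially for clarity; on a GPU each inner update runs as a parallel kernel over shared memory. It assumes a power-of-two input length.

```python
import numpy as np

# Work-efficient (Blelloch-style) exclusive scan.
def exclusive_scan(a):
    x, n = a.astype(np.int64).copy(), len(a)
    d = 1
    while d < n:                        # up-sweep (reduce) phase
        x[2*d-1::2*d] += x[d-1::2*d]
        d *= 2
    x[n-1] = 0                          # clear the root
    while d > 1:                        # down-sweep phase
        d //= 2
        t = x[d-1::2*d].copy()
        x[d-1::2*d] = x[2*d-1::2*d]
        x[2*d-1::2*d] += t
    return x

assert list(exclusive_scan(np.array([1, 2, 3, 4]))) == [0, 1, 3, 6]
```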
6a640438a4e50fa31943462eeca716413891a773
We present a new ranking algorithm that combines the strengths of two previous methods: boosted tree classification, and LambdaRank, which has been shown to be empirically optimal for a widely used information retrieval measure. The algorithm is based on boosted regression trees, although the ideas apply to any weak learners, and it is significantly faster in both train and test phases than the state of the art, for comparable accuracy. We also show how to find the optimal linear combination for any two rankers, and we use this method to solve the line search problem exactly during boosting. In addition, we show that starting with a previously trained model, and boosting using its residuals, furnishes an effective technique for model adaptation, and we give results for a particularly pressing problem in Web search: training rankers for markets for which only small amounts of labeled data are available, given a ranker trained on much more data from a larger market.
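The model-adaptation idea (start from a trained ranker, boost on its residuals) can be sketched generically with regression trees. This is only the residual-boosting backbone under assumed hyperparameters, not the paper's LambdaRank-based gradients or line search.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# base_scores: scores of the previously trained ranker on the new
# market's data; small trees are fit to the remaining residuals.
def adapt(base_scores, X_new, y_new, n_trees=50, lr=0.1):
    scores, trees = base_scores.astype(float).copy(), []
    for _ in range(n_trees):
        tree = DecisionTreeRegressor(max_depth=3).fit(X_new, y_new - scores)
        scores += lr * tree.predict(X_new)
        trees.append(tree)
    return trees
```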
72691b1adb67830a58bebdfdf213a41ecd38c0ba
We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.
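The base/detail decomposition DerainNet trains on can be sketched as a low-pass base layer plus a high-pass detail layer carrying the rain streaks. The particular low-pass filter below (a Gaussian blur on a grayscale image) is an assumption for the sketch, not necessarily the paper's choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_layers(image, sigma=3.0):
    base = gaussian_filter(image.astype(float), sigma=sigma)
    detail = image - base        # CNN operates on this detail layer
    return base, detail

# de-rained output = base + cnn(detail), with the CNN trained on
# synthetic rainy/clean detail-layer pairs.
```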
34d1ba9476ae474f1895dbd84e8dc82b233bc32e
1cdc4ad61825d3a7527b85630fe60e0585fb9347
Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learning-focused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.
f3ac0d94ba2374e46dfa3a13effcc540205faf21
49fd00a22f44a52f4699730403033416e0762e6d
860d3d4114711fa4ce9a5a4ccf362b80281cc981
This paper reports on a trade study we performed to support the development of a Cyber ontology from an initial malware ontology. The goals of the Cyber ontology effort are first described, followed by a discussion of the ontology development methodology used. The main body of the paper then follows, which is a description of the potential ontologies and standards that could be utilized to extend the Cyber ontology from its initially constrained malware focus. These resources include, in particular, Cyber and malware standards, schemas, and terminologies that directly contributed to the initial malware ontology effort. Other resources are upper (sometimes called 'foundational') ontologies. Core concepts that any Cyber ontology will extend have already been identified and rigorously defined in these foundational ontologies. However, for lack of space, this section is profoundly reduced. In addition, utility ontologies that are focused on time, geospatial, person, events, and network operations are briefly described. These utility ontologies can be viewed as specialized super-domain or even mid-level ontologies, since they span many, if not most, ontologies -including any Cyber ontology. An overall view of the ontological architecture used by the trade study is also given. The report on the trade study concludes with some proposed next steps in the iterative evolution of the
4767a0c9f7261a4265db650d3908c6dd1d10a076
Tracking-by-detection has proven to be the most successful strategy to address the task of tracking multiple targets in unconstrained scenarios [e.g. 40, 53, 55]. Traditionally, a set of sparse detections, generated in a preprocessing step, serves as input to a high-level tracker whose goal is to correctly associate these “dots” over time. An obvious shortcoming of this approach is that most information available in image sequences is simply ignored by thresholding weak detection responses and applying non-maximum suppression. We propose a multi-target tracker that exploits low level image information and associates every (super)-pixel to a specific target or classifies it as background. As a result, we obtain a video segmentation in addition to the classical bounding-box representation in unconstrained, real-world videos. Our method shows encouraging results on many standard benchmark sequences and significantly outperforms state-of-the-art tracking-by-detection approaches in crowded scenes with long-term partial occlusions.
8eefd28eb47e72794bb0355d8abcbebaac9d8ab1
For several decades, statisticians have advocated using a combination of labeled and unlabeled data to train classifiers by estimating parameters of a generative model through iterative Expectation-Maximization (EM) techniques. This chapter explores the effectiveness of this approach when applied to the domain of text classification. Text documents are represented here with a bag-of-words model, which leads to a generative classification model based on a mixture of multinomials. This model is an extremely simplistic representation of the complexities of written text. This chapter explains and illustrates three key points about semi-supervised learning for text classification with generative models. First, despite the simplistic representation, some text domains have a high positive correlation between generative model probability and classification accuracy. In these domains, a straightforward application of EM with the naive Bayes text model works well. Second, some text domains do not have this correlation. Here we can adopt a more expressive and appropriate generative model that does have a positive correlation. In these domains, semi-supervised learning again improves classification accuracy. Finally, EM suffers from the problem of local maxima, especially in high dimension domains such as text classification. We demonstrate that deterministic annealing, a variant of EM, can help overcome the problem of local maxima and increase classification accuracy further when the generative model is appropriate.
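A minimal sketch of semi-supervised EM with naive Bayes: fit on the labeled counts, then alternate between soft-labeling the unlabeled documents (E-step) and refitting on everything with probabilities as sample weights (M-step). Dense nonnegative count matrices and the iteration count are assumptions of the sketch; deterministic annealing is not shown.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def em_naive_bayes(X_lab, y_lab, X_unlab, n_iter=10):
    clf = MultinomialNB().fit(X_lab, y_lab)
    for _ in range(n_iter):
        probs = clf.predict_proba(X_unlab)                       # E-step
        X_all = np.vstack([X_lab] + [X_unlab] * len(clf.classes_))
        y_all = np.concatenate([y_lab] + [np.full(len(X_unlab), c)
                                          for c in clf.classes_])
        w_all = np.concatenate([np.ones(len(y_lab))] +
                               [probs[:, k] for k in range(probs.shape[1])])
        clf = MultinomialNB().fit(X_all, y_all, sample_weight=w_all)  # M-step
    return clf
```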
696ad1c38b588dae3295668a0fa34021c4481030
We present a method for training multi-label, massively multi-class image classification models, that is faster and more accurate than supervision via a sigmoid cross-entropy loss (logistic regression). Our method consists in embedding high-dimensional sparse labels onto a lower-dimensional dense sphere of unit-normed vectors, and treating the classification problem as a cosine proximity regression problem on this sphere. We test our method on a dataset of 300 million high-resolution images with 17,000 labels, where it yields considerably faster convergence, as well as a 7% higher mean average precision compared to logistic regression.
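The label-embedding idea can be sketched as follows: place each label at a unit-normed point on a lower-dimensional sphere and regress image features onto it with a cosine-proximity loss, instead of a 300M-wide sigmoid output layer. The embedding dimension and random (rather than learned) label vectors are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, dim = 17_000, 256                      # dim is illustrative
E = rng.normal(size=(n_labels, dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)    # unit-normed label vectors

# pred: (batch, dim) network outputs; target_ids: (batch,) label indices.
def cosine_loss(pred, target_ids):
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(pred * E[target_ids], axis=1))
```

For the multi-label case, the regression target would instead be a normalized combination of the relevant label vectors.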
ad5974c04b316f4f379191e4dbea836fd766f47c
This paper reports on the benefits of largescale statistical language modeling in machine translation. A distributed infrastructure is proposed which we use to train on up to 2 trillion tokens, resulting in language models having up to 300 billion n-grams. It is capable of providing smoothed probabilities for fast, single-pass decoding. We introduce a new smoothing method, dubbed Stupid Backoff, that is inexpensive to train on large data sets and approaches the quality of Kneser-Ney Smoothing as the amount of training data increases.
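Stupid Backoff itself is simple enough to show directly: relative frequency when the n-gram was seen, otherwise back off to the shorter context with a fixed multiplier (the paper suggests 0.4). The resulting scores are deliberately not normalized probabilities.

```python
ALPHA = 0.4  # fixed backoff factor from the paper

# counts: dict mapping token tuples (all orders) to corpus counts;
# words: tuple of tokens ending in the word being scored.
def stupid_backoff(counts, total_tokens, words):
    if len(words) == 1:
        return counts.get(words, 0) / total_tokens
    context = words[:-1]
    if counts.get(words, 0) > 0 and counts.get(context, 0) > 0:
        return counts[words] / counts[context]
    return ALPHA * stupid_backoff(counts, total_tokens, words[1:])
```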
6cb45af3db1de2ba5466aedcb698deb6c4bb4678
In this project, we are interested in building an end-to-end neural network architecture for the Question Answering task on the well-known Stanford Question Answering Dataset (SQuAD). Our implementation is motivated by a recent high-performing method that combines a coattention encoder with a dynamic pointing decoder, known as the Dynamic Coattention Network. We explored different ensemble and test decoding techniques that we believe might improve the performance of such systems.

e11d5a4edec55f5d5dc8ea25621ecbf89e9bccb7
The dependency of our society on networked computers has become frightening: In the economy, all-digital networks have turned from facilitators to drivers; as cyber-physical systems are coming of age, computer networks are now becoming the central nervous systems of our physical world—even of highly critical infrastructures such as the power grid. At the same time, the 24/7 availability and correct functioning of networked computers has become much more threatened: The number of sophisticated and highly tailored attacks on IT systems has significantly increased. Intrusion Detection Systems (IDSs) are a key component of the corresponding defense measures; they have been extensively studied and utilized in the past. Since conventional IDSs are not scalable to big company networks and beyond, nor to massively parallel attacks, Collaborative IDSs (CIDSs) have emerged. They consist of several monitoring components that collect and exchange data. Depending on the specific CIDS architecture, central or distributed analysis components mine the gathered data to identify attacks. Resulting alerts are correlated among multiple monitors in order to create a holistic view of the network monitored. This article first determines relevant requirements for CIDSs; it then differentiates distinct building blocks as a basis for introducing a CIDS design space and for discussing it with respect to requirements. Based on this design space, attacks that evade CIDSs and attacks on the availability of the CIDSs themselves are discussed. The entire framework of requirements, building blocks, and attacks as introduced is then used for a comprehensive analysis of the state of the art in collaborative intrusion detection, including a detailed survey and comparison of specific CIDS approaches.
720158a53b79667e39c2caf2f7ebb2670b848693
Preserving a person's privacy in an efficient manner is very important for critical, life-saving infrastructures like body sensor networks (BSN). This paper presents a novel key agreement scheme which allows two sensors in a BSN to agree to a common key generated using electrocardiogram (EKG) signals. This EKG-based key agreement (EKA) scheme aims to bring the "plug-n-play" paradigm to BSN security whereby simply deploying sensors on the subject can enable secure communication, without requiring any form of initialization such as pre-deployment. Analysis of the scheme based on real EKG data (obtained from MIT PhysioBank database) shows that keys resulting from EKA are: random, time variant, can be generated based on short-duration EKG measurements, identical for a given subject and different for separate individuals.
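Purely to illustrate the flavor of physiological key generation (the paper's actual feature extraction, quantization, and key agreement protocol differ in detail), a naive sketch might quantize frequency-domain features of a synchronized EKG window into bits, so that two sensors on the same subject derive matching keys:

```python
import numpy as np

# Hypothetical quantizer: one bit per FFT bin, thresholded at the median.
# Real EKA must also handle windowing, alignment, and bit reconciliation.
def ekg_key_bits(signal, n_bits=64):
    spectrum = np.abs(np.fft.rfft(signal))[1:n_bits + 1]
    return (spectrum > np.median(spectrum)).astype(int)
```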
f692c692d3426cc663f3ec9be0c7025b670b2e5c
For many years, the IT industry has sought to accelerate the software development process by assembling new applications from existing software assets. However, true component-based reuse of the form Douglas McIlroy envisaged in the 1960s is still the exception rather than the rule, and most of the systematic software reuse practiced today uses heavyweight approaches such as product-line engineering or domain-specific frameworks. By component, we mean any cohesive and compact unit of software functionality with a well-defined interface - from simple programming language classes to more complex artifacts such as Web services and Enterprise JavaBeans.
96ea8f0927f87ab4be3a7fd5a3b1dd38eeaa2ed6
A wideband and simple torus knot monopole antenna is presented in this letter. The antenna is fabricated using additive manufacturing technology, commonly known as 3-D printing. The antenna is mechanically simple to fabricate and has stable radiation pattern as well as input reflection coefficient below -10 dB over a frequency range of 1-2 GHz. A comparison of measured and simulated performance of the antenna is also presented.
206b204618640917f278e72bd0e2a881d8cec7ad
One of the major obstacles to using Bayesian methods for pattern recognition has been their computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, "Expectation Propagation," unifies and generalizes two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with a simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction, propagating richer belief states that incorporate correlations between variables. This framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection.
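The moment-matching step at the heart of assumed-density filtering and Expectation Propagation can be shown on a toy one-dimensional model: absorb one non-Gaussian factor into a Gaussian belief by matching the mean and variance of the tilted distribution, which yields the KL-closest Gaussian. The brute-force quadrature grid below is purely illustrative.

```python
import numpy as np

# mu, var: current Gaussian belief; factor: callable returning the
# (nonnegative) value of the factor being absorbed at each grid point.
def match_moments(mu, var, factor, grid=np.linspace(-10, 10, 4001)):
    prior = np.exp(-0.5 * (grid - mu) ** 2 / var)
    tilted = prior * factor(grid)               # unnormalized tilted dist.
    tilted /= np.trapz(tilted, grid)
    new_mu = np.trapz(grid * tilted, grid)
    new_var = np.trapz((grid - new_mu) ** 2 * tilted, grid)
    return new_mu, new_var                      # KL-closest Gaussian
```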
ad40428b40b051164ade961bc841a0da2c44515d
e4bd80adc5a3486c3a5c3d82cef91b70b67ae681
This article empirically tests five structural models of corporate bond pricing: those of Merton (1974), Geske (1977), Longstaff and Schwartz (1995), Leland and Toft (1996), and Collin-Dufresne and Goldstein (2001). We implement the models using a sample of 182 bond prices from firms with simple capital structures during the period 1986–1997. The conventional wisdom is that structural models do not generate spreads as high as those seen in the bond market, and true to expectations, we find that the predicted spreads in our implementation of the Merton model are too low. However, most of the other structural models predict spreads that are too high on average. Nevertheless, accuracy is a problem, as the newer models tend to severely overstate the credit risk of firms with high leverage or volatility and yet suffer from a spread underprediction problem with safer bonds. The Leland and Toft model is an exception in that it overpredicts spreads on most bonds, particularly those with high coupons. More accurate structural models must avoid features that increase the credit risk on the riskier bonds while scarcely affecting the spreads of the safest bonds.
da67375c8b6a250fbd5482bfbfce14f4eb7e506c
This survey presents an overview of the autonomous development of mental capabilities in computational agents. It does so based on a characterization of cognitive systems as systems which exhibit adaptive, anticipatory, and purposive goal-directed behavior. We present a broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems. We then review several cognitive architectures drawn from these paradigms. In each of these areas, we highlight the implications and attendant problems of adopting a developmental approach, both from phylogenetic and ontogenetic points of view. We conclude with a summary of the key architectural features that systems capable of autonomous development of mental capabilities should exhibit.
e7ee27816ade366584d411f4287e50bdc4771e56
55289d3feef4bc1e4ff17008120e371eb7f55a24
Recently a variety of LSTM-based conditional language models (LM) have been applied across a range of language generation tasks. In this work we study various model architectures and different ways to represent and aggregate the source information in an end-to-end neural dialogue system framework. A method called snapshot learning is also proposed to facilitate learning from supervised sequential signals by applying a companion cross-entropy objective function to the conditioning vector. The experimental and analytical results demonstrate firstly that competition occurs between the conditioning vector and the LM, and the differing architectures provide different trade-offs between the two. Secondly, the discriminative power and transparency of the conditioning vector is key to providing both model interpretability and better performance. Thirdly, snapshot learning leads to consistent performance improvements independent of which architecture is used.
75c4b33059aa300e7b52d1b5dab37968ac927e89
A 2 × 1 dual-polarized L-probe stacked patch antenna array is presented. It has employed a novel technique to achieve high isolation between the two input ports. The proposed antenna has a 14-dB return loss bandwidth of 19.8%, which is ranged from 0.808 to 0.986 GHz, for both ports. Also, it has an input port isolation of more than 30 dB and an average gain of 10.5 dBi over this bandwidth. Moreover, its radiation patterns in the two principal planes have cross-polarization level of less than -15 dB within the 3-dB beamwidths across the passband. Due to these features, this antenna array is highly suitable for the outdoor base station that is required to cover the operating bandwidths of both CDMA800 and GSM900 mobile communication systems.
b891a8df3d7b4a6b73c9de7194f7341b00d93f6f
Recommender systems are promising for providing personalized services. Collaborative filtering (CF) technologies, which predict users' preferences from their previous behaviors, have become one of the most successful techniques for building modern recommender systems. Several challenging issues occur in previously proposed CF methods: (1) most CF methods ignore users' response patterns and may yield biased parameter estimates and suboptimal performance; (2) some CF methods adopt heuristic weight settings that lack a systematic implementation; and (3) the multinomial mixture models may weaken the computational ability of matrix factorization for generating the data matrix, thus increasing the computational cost of training. To resolve these issues, we incorporate users' response models into probabilistic matrix factorization (PMF), a popular matrix factorization CF model, to establish the response-aware probabilistic matrix factorization (RAPMF) framework. More specifically, we model the user response as a Bernoulli distribution parameterized by the rating scores for observed ratings, and as a step function for unobserved ratings. Moreover, we speed up the algorithm with a mini-batch implementation and a carefully crafted scheduling policy. Finally, we design different experimental protocols and conduct a systematic empirical evaluation on both synthetic and real-world datasets to demonstrate the merits of the proposed RAPMF and its mini-batch implementation.
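As a sketch of the response-model idea, the function below assigns each rating a probability of being revealed: a Bernoulli parameter that grows with the rating score for observed entries (the logistic link and its parameters are our illustrative choice) and a constant step value for unobserved entries. In the full RAPMF framework this term multiplies into the PMF likelihood.

```python
import numpy as np

def response_prob(r, mu=3.0, scale=1.0, observed=True, eps=0.05):
    """P(user reveals a rating). For observed entries, a Bernoulli parameter
    increasing in the rating r (logistic link, illustrative); for unobserved
    entries, a constant step value eps."""
    if observed:
        return 1.0 / (1.0 + np.exp(-(r - mu) / scale))
    return eps

print(response_prob(5.0), response_prob(1.0), response_prob(0, observed=False))
```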
1459d4d16088379c3748322ab0835f50300d9a38
Cross-domain visual data matching is one of the fundamental problems in many real-world vision tasks, e.g., matching persons across ID photos and surveillance videos. Conventional approaches to this problem usually involve two steps: i) projecting samples from different domains into a common space, and ii) computing (dis-)similarity in this space based on a certain distance. In this paper, we present a novel pairwise similarity measure that advances existing models by i) expanding traditional linear projections into affine transformations and ii) fusing affine Mahalanobis distance and cosine similarity by a data-driven combination. Moreover, we unify our similarity measure with feature representation learning via deep convolutional neural networks. Specifically, we incorporate the similarity measure matrix into the deep architecture, enabling an end-to-end way of model optimization. We extensively evaluate our generalized similarity model in several challenging cross-domain matching tasks: person re-identification under different views and face verification over different modalities (i.e., faces from still images and videos, older and younger faces, and sketch and photo portraits). The experimental results demonstrate superior performance of our model over other state-of-the-art methods.
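A minimal sketch of the similarity measure itself, with the per-domain affine maps, the Mahalanobis matrix, and the fusion weight taken as given; in the paper all of these are learned jointly with the CNN features rather than fixed as here.

```python
import numpy as np

def generalized_similarity(x, y, A, b, C, d, M, w=0.5):
    """Affine-project x and y into a common space, then fuse an affine
    Mahalanobis distance with cosine similarity via weight w (illustrative;
    the paper learns the combination from data)."""
    u = A @ x + b                          # domain-1 affine projection
    v = C @ y + d                          # domain-2 affine projection
    diff = u - v
    mahalanobis = diff @ M @ diff          # squared Mahalanobis distance
    cosine = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return w * cosine - (1 - w) * mahalanobis   # higher = more similar
```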
03a00248b7d5e2d89f5337e62c39fad277c66102
To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a "problem" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = ⟨G, u, v, k⟩ is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder.
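The PATH decision problem is easy to make concrete. A minimal sketch, using breadth-first search on an unweighted adjacency-list graph (the representation is our choice):

```python
from collections import deque

def PATH(G, u, v, k):
    """Decision problem from the text: return 1 if a shortest u-v path in
    the unweighted graph G (adjacency-list dict) has at most k edges,
    else 0."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in G[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return 1 if v in dist and dist[v] <= k else 0

G = {1: [2], 2: [1, 3], 3: [2], 4: []}
print(PATH(G, 1, 3, 2))   # 1: shortest path 1-2-3 has 2 <= 2 edges
print(PATH(G, 1, 4, 3))   # 0: no path exists
```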
9ac5b66036da98f2c1e62c6ca2bdcc075083ef85
f45eb5367bb9fa9a52fd4321a63308a37960e93a
Part I of this paper proposed a development process and a system platform for the development of autonomous cars based on a distributed system architecture. The proposed development methodology enabled the design and development of an autonomous car with benefits such as a reduction in computational complexity, fault-tolerant characteristics, and system modularity. In this paper (Part II), a case study of the proposed development methodology is addressed by showing the implementation process of an autonomous driving system. In order to describe the implementation process intuitively, core autonomous driving algorithms (localization, perception, planning, vehicle control, and system management) are briefly introduced and applied to the implementation of an autonomous driving system. We are able to examine the advantages of a distributed system architecture and the proposed development process by conducting a case study on the autonomous system implementation. The validity of the proposed methodology is proved through the autonomous car A1 that won the 2012 Autonomous Vehicle Competition in Korea with all missions completed.
db17a183cb220ae8473bf1b25d62d5ef6fcfeac7
Although all existing air-filled substrate integrated waveguide (AFSIW) topologies yield a substrate-independent electrical performance, they rely on dedicated, expensive, laminates to form air-filled regions that contain the electromagnetic fields. This paper proposes a novel substrate-independent AFSIW manufacturing technology, enabling straightforward integration of high-performance microwave components into a wide range of general-purpose commercially available surface materials by means of standard additive (3-D printing) or subtractive (computer numerically controlled milling/laser cutting) manufacturing processes. First, an analytical formula is derived for the effective permittivity and loss tangent of the AFSIW waveguide. This allows the designer to reduce substrate losses to levels typically encountered in high-frequency laminates. Then, several microwave components are designed and fabricated. Measurements of multiple AFSIW waveguides and a four-way power divider/combiner, both relying on a new coaxial-to-air-filled SIW transition, prove that this novel approach yields microwave components suitable for direct integration into everyday surfaces, with low insertion loss, and excellent matching and isolation over the entire [5.15–5.85] GHz band. Hence, this innovative approach paves the way for a new generation of cost-effective, high-performance, and invisibly integrated smart surface systems that efficiently exploit the area and the materials available in everyday objects.
8216673632b897ec50db06358b77f13ddd432c47
05eef019bac01e6520526510c2590cc1718f7fe6
Mobile livestreaming is now well into its third wave. From early systems such as Bambuser and Qik, to more popular apps Meerkat and Periscope, to today's integrated social streaming features in Facebook and Instagram, both technology and usage have changed dramatically. In this latest phase of livestreaming, cameras turn inward to focus on the streamer, instead of outwards on the surroundings. Teens are increasingly using these platforms to entertain friends, meet new people, and connect with others on shared interests. We studied teens' livestreaming behaviors and motivations on these new platforms through a survey completed by 2,247 American livestreamers and interviews with 20 teens, highlighting changing practices, teens' differences from the broader population, and implications for designing new livestreaming services.
08c30bbfb9ff90884f9d1f873a1eeb6bb616e761
Impossibility theorems suggest that the only efficient and strategyproof mechanisms for the problem of combinatorial assignment - e.g., assigning schedules of courses to students - are dictatorships. Dictatorships are mostly rejected as unfair: for any two agents, one chooses all their objects before the other chooses any. Any solution will involve compromise amongst efficiency, incentive and fairness considerations. This paper proposes a solution to the combinatorial assignment problem. It is developed in four steps. First, I propose two new criteria of outcome fairness, the maximin share guarantee and envy bounded by a single good, which weaken well-known criteria to accommodate indivisibilities; the criteria formalize why dictatorships are unfair. Second, I prove existence of an approximation to Competitive Equilibrium from Equal Incomes in which (i) incomes are unequal but arbitrarily close together; (ii) the market clears with error, which approaches zero in the limit and is small for realistic problems. Third, I show that this Approximate CEEI satisfies the fairness criteria. Last, I define a mechanism based on Approximate CEEI that is strategyproof for the zero-measure agents economists traditionally regard as price takers. The proposed mechanism is calibrated on real data and is compared to alternatives from theory and practice: all other known mechanisms are either manipulable by zero-measure agents or unfair ex-post, and most are both manipulable and unfair.
7d2c7748359f57c2b4227b31eca9e5f7a70a6b5c
0d1fd04c0dec97bd0b1c4deeba21b8833f792651
Design considerations and performance evaluations of a three-phase, four-switch, single-stage, isolated zero-voltage-switching (ZVS) rectifier are presented. The circuit is obtained by integrating the three-phase, two-switch, ZVS, discontinuous-current-mode (DCM), boost power-factor-correction (PFC) rectifier, named for short the TAIPEI rectifier, with the ZVS full-bridge (FB) phase-shift dc/dc converter. The performance was evaluated on a three-phase 2.7-kW prototype designed for HVDC distribution applications with a line-to-line voltage range from 180 VRMS to 264 VRMS and with a tightly regulated variable dc output voltage from 200 V to 300 V. The prototype operates with ZVS over the entire input-voltage and load-current range and achieves less than 5% input-current THD with efficiency in the 95% range.
5417bd72d1b787ade0c485f1188189474c199f4d
We propose a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge loss objective function. We estimate the appropriate hinge loss margin with the expected energy of the target distribution, and derive both a principled criterion for updating the margin and an approximate convergence measure. The resulting training procedure is simple yet robust on a diverse set of datasets. We evaluate the proposed training procedure on the task of unsupervised image generation, noting both qualitative and quantitative performance improvements.
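A sketch of the discriminator's hinge objective with an explicit margin; setting the margin to 1.0 recovers the standard hinge GAN loss. The margin-update rule shown (an exponential moving average of the discriminator's output on real data) is only a plausible stand-in for "the expected energy of the target distribution", not necessarily the paper's estimator.

```python
import torch
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake, margin):
    """Hinge objective for the discriminator with a tunable margin."""
    return F.relu(margin - d_real).mean() + F.relu(margin + d_fake).mean()

def update_margin(margin, d_real, rho=0.95):
    """Assumed margin update: track the expected energy of the target
    distribution with an exponential moving average (our simplification)."""
    return rho * margin + (1 - rho) * d_real.detach().mean().item()
```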
007ee2559d4a2a8c661f4f5182899f03736682a7
The Controller Area Network (CAN) bus protocol [1] is a bus protocol invented in 1986 by Robert Bosch GmbH, originally intended for automotive use. By now, the bus can be found in devices ranging from cars and trucks, over lighting setups, to industrial looms. Due to its nature, it is a system very much focused on safety, i.e., reliability. Unfortunately, there is no built-in way to enforce security, such as encryption or authentication. In this paper, we investigate the problems associated with implementing a backward-compatible message authentication protocol on the CAN bus. We show which constraints such a protocol has to meet and why this eliminates, to the best of our knowledge, all the authentication protocols published so far. Furthermore, we present a message authentication protocol, CANAuth, that meets all of the requirements set forth and does not violate any constraint of the CAN bus. Keywords: CAN bus, embedded networks, broadcast authentication, symmetric cryptography.
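To fix ideas, here is a generic illustration of symmetric message authentication under CAN-like payload constraints: a truncated HMAC plus a monotonic counter against replay. This is not the CANAuth protocol itself, only a sketch of the kind of primitive such protocols build on; the key and framing are placeholders.

```python
import hmac, hashlib, struct

KEY = b"16-byte-demo-key"   # placeholder pre-shared key

def make_tag(msg_id: int, payload: bytes, counter: int) -> bytes:
    """Truncated HMAC over (message id, counter, payload)."""
    data = struct.pack(">IQ", msg_id, counter) + payload
    return hmac.new(KEY, data, hashlib.sha256).digest()[:8]  # fits 8 bytes

def verify(msg_id: int, payload: bytes, counter: int, tag: bytes) -> bool:
    """Constant-time comparison of the received tag."""
    return hmac.compare_digest(make_tag(msg_id, payload, counter), tag)

t = make_tag(0x123, b"\x01\x02", counter=42)
print(verify(0x123, b"\x01\x02", 42, t))   # True
print(verify(0x123, b"\x01\x02", 43, t))   # False: replayed counter fails
```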
129359a872783b7c3a82c2c9dbef75df2956d2d3
XFI is a comprehensive protection system that offers both flexible access control and fundamental integrity guarantees, at any privilege level and even for legacy code in commodity systems. For this purpose, XFI combines static analysis with inline software guards and a two-stack execution model. We have implemented XFI for Windows on the x86 architecture using binary rewriting and a simple, stand-alone verifier; the implementation's correctness depends on the verifier, but not on the rewriter. We have applied XFI to software such as device drivers and multimedia codecs. The resulting modules function safely within both kernel and user-mode address spaces, with only modest enforcement overheads.
3b938f66d03559e1144fa2ab63a3a9a076a6b48b
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., minimizing ℓ1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general ℓ1-regularized convex minimization problem, i.e., minimizing an ℓ1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale ℓ1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale ℓ1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale ℓ1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed just to solve these special classes of problems.
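To make the per-coordinate update concrete, here is a minimal sketch of coordinate descent with soft-thresholding for the ℓ1-regularized least squares problem. Note that CGD selects coordinate blocks with a Gauss-Southwell-type rule; the cyclic sweep below is a simplification for brevity.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, iters=100):
    """Cyclic coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1.
    (The paper's CGD uses Gauss-Southwell block selection instead.)"""
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # per-column squared norms
    r = b - A @ x                      # running residual
    for _ in range(iters):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]   # partial correlation
            x_new = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)          # update residual in place
            x[j] = x_new
    return x

rng = np.random.default_rng(0)
A, x_true = rng.normal(size=(50, 100)), np.zeros(100)
x_true[:5] = 3.0
print(np.nonzero(cd_lasso(A, A @ x_true, lam=1.0))[0][:10])  # sparse support
```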
8ad03b36ab3cba911699fe1699332c6353f227bc
According to UNESCO, education is a fundamental human right, and every nation's citizens should be granted universal access to education of equal quality. Because this goal is yet to be achieved in most countries, particularly in developing and underdeveloped countries, it is extremely important to find more effective ways to improve education. This paper presents a model based on the application of computational intelligence (data mining and data science) that leads to the development of a student's knowledge profile and that can help educators in their decision making for better orienting their students. The model also tries to establish key performance indicators to monitor the achievement of objectives within the individual strategic plan assembled for each student. It uses random forests for classification and prediction, graph description for data structure visualization, and recommendation systems to present relevant information to stakeholders. The results presented were built from a real dataset obtained from a Brazilian private K-9 (elementary) school. They include correlations among key data, a model to predict student performance, and recommendations generated for the stakeholders.
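A minimal sketch of the prediction component: a random forest on a hypothetical per-student table of numeric features. The file name, column names, and settings are placeholders, not the paper's actual dataset.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("students.csv")          # hypothetical numeric feature table
X, y = df.drop(columns=["passed"]), df["passed"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Feature importances can feed the knowledge profile / KPI reporting.
print(sorted(zip(clf.feature_importances_, X.columns), reverse=True)[:5])
```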
500923d2513d30299350a6a0e9b84b077250dc78
Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent, entity classes.
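A sketch of the matching process: a Tversky-style ratio model per feature type, combined across synonym sets, parts, and functions. The set representation, the weights, and the asymmetry parameter are illustrative choices, not the paper's calibrated values.

```python
def feature_similarity(a: set, b: set, alpha: float = 0.5) -> float:
    """Tversky-style ratio model over one feature type (e.g., parts):
    common features count for, non-common features against."""
    common = len(a & b)
    denom = common + alpha * len(a - b) + (1 - alpha) * len(b - a)
    return common / denom if denom else 0.0

def entity_similarity(e1: dict, e2: dict, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination over synonym sets, parts, and functions,
    mirroring the matching process described above (weights are ours)."""
    ww, wp, wf = weights
    return (ww * feature_similarity(e1["synonyms"], e2["synonyms"])
            + wp * feature_similarity(e1["parts"], e2["parts"])
            + wf * feature_similarity(e1["functions"], e2["functions"]))

stadium = {"synonyms": {"stadium", "arena"}, "parts": {"field", "seats"},
           "functions": {"host events"}}
theater = {"synonyms": {"theater"}, "parts": {"stage", "seats"},
           "functions": {"host events"}}
print(entity_similarity(stadium, theater))  # similar, not equivalent
```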
1c58b4c7adee37874ac96f7d859d1a51f97bf6aa
Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered a 'black art' in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that the best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. We also compare the performance of stacked generalization with majority vote and published results for arcing and bagging.
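A minimal sketch of the recipe above using scikit-learn: the higher-level model sees out-of-fold class probabilities (the "confidences") from the level-0 learners rather than their hard predictions. The particular learners and dataset are our illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
level0 = [GaussianNB(), DecisionTreeClassifier(random_state=0)]

# Out-of-fold probabilities avoid leaking training labels to the meta-model.
meta_features = np.hstack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba") for m in level0
])

meta = LogisticRegression(max_iter=1000).fit(meta_features, y)
print("stacked accuracy on meta-features:", meta.score(meta_features, y))
```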
017ee86aa9be09284a2e07c9200192ab3bea9671
Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labelled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, to close the gap between conditional and unconditional GANs. In particular, we allow the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training. We test empirically both the quality of the learned image representations and the quality of the synthesized images. Under the same conditions, the self-supervised GAN attains a similar performance to state-of-the-art conditional counterparts. Finally, we show that this approach to fully unsupervised learning can be scaled to attain an FID of 33 on unconditional IMAGENET generation.
5c695f1810951ad1bbdf7da5f736790dca240e5b
The analysis of user-generated content on social media and the accurate specification of user opinions towards products and events is quite valuable to many applications. With the proliferation of Web 2.0 and the rapid growth of user-generated content on the web, approaches to aspect-level sentiment analysis that yield fine-grained information are of great interest. In this work, a classifier ensemble approach for aspect-based sentiment analysis is presented. The approach is generic and utilizes latent Dirichlet allocation to model a topic and to specify the main aspects that users address. Then, each comment is further analyzed and word dependencies that indicate the interactions between words and aspects are extracted. An ensemble classifier composed of naive Bayes, maximum entropy, and support vector machine classifiers is designed to recognize the polarity of the user's comment towards each aspect. The evaluation results show sound improvement compared to individual classifiers and indicate that the ensemble system is scalable and accurate in analyzing user-generated content and in specifying users' opinions and attitudes.
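A sketch of such a polarity ensemble, assuming the maximum-entropy model is realized as logistic regression (a standard equivalence) and simple bag-of-words features; the feature settings are illustrative, and the paper's aspect extraction step is omitted.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import VotingClassifier

# Majority vote over the three classifier families named above.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier([
        ("nb", MultinomialNB()),
        ("maxent", LogisticRegression(max_iter=1000)),
        ("svm", LinearSVC()),
    ], voting="hard"),
)
# Usage (placeholder data): ensemble.fit(train_comments, train_polarity)
#                           ensemble.predict(test_comments)
```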
4f1fe957a29a2e422d4034f4510644714d33fb20
We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.
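For orientation, a baseline in the spirit of this setup: unigram presence features (binary counts) with naive Bayes on positive/negative review texts. The variable names `reviews` and `labels` are placeholders; this is a sketch, not the paper's exact configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# binary=True encodes feature *presence* rather than frequency.
clf = make_pipeline(CountVectorizer(binary=True), BernoulliNB())

# Usage (placeholder data): reviews: list[str], labels: list[int]
# print(cross_val_score(clf, reviews, labels, cv=3).mean())
```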
722e2f7894a1b62e0ab09913ce9b98654733d98e
2485c98aa44131d1a2f7d1355b1e372f2bb148ad
In this paper, we describe the acquisition and contents of a large-scale Chinese face database: the CAS-PEAL face database. The goals of creating the CAS-PEAL face database include the following: 1) providing the worldwide researchers of face recognition with different sources of variations, particularly pose, expression, accessories, and lighting (PEAL), and exhaustive ground-truth information in one uniform database; 2) advancing the state-of-the-art face recognition technologies aiming at practical applications by using off-the-shelf imaging equipment and by designing normal face variations in the database; and 3) providing a large-scale face database of Mongolian. Currently, the CAS-PEAL face database contains 99 594 images of 1040 individuals (595 males and 445 females). A total of nine cameras are mounted horizontally on an arc arm to simultaneously capture images across different poses. Each subject is asked to look straight ahead, up, and down to obtain 27 images in three shots. Five facial expressions, six accessories, and 15 lighting changes are also included in the database. A selected subset of the database (CAS-PEAL-R1, containing 30 863 images of the 1040 subjects) is now available to other researchers. We discuss the evaluation protocol based on the CAS-PEAL-R1 database and present the performance of four algorithms as a baseline to do the following: 1) elementarily assess the difficulty of the database for face recognition algorithms; 2) provide reference evaluation results for researchers using the database; and 3) identify the strengths and weaknesses of the commonly used algorithms.
a0456c27cdd58f197032c1c8b4f304f09d4c9bc5
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a weighted vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that AdaBoost does not overfit rapidly.
9a292e0d862debccffa04396cd5bceb5d866de18
610bc4ab4fbf7f95656b24330eb004492e63ffdf
We study the Nonnegative Matrix Factorization (NMF) problem, which approximates a nonnegative matrix by a low-rank factorization. This problem is particularly important in machine learning and arises in a large number of applications. Unfortunately, the original formulation is ill-posed and NP-hard. In this paper, we propose a row-sparse model based on Row Entropy Minimization to solve the NMF problem under the separability assumption, which states that each data point is a convex combination of a few distinct data columns. We utilize the concentration of the entropy function and the ℓ∞ norm to concentrate the energy on the least number of latent variables. We prove that under the separability assumption, our proposed model robustly recovers the data columns that generate the dataset, even when the data is corrupted by noise. We empirically justify the robustness of the proposed model and show that it is significantly more robust than the state-of-the-art separable NMF algorithms.
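For context, a minimal sketch of a standard separable-NMF baseline, the Successive Projection Algorithm (SPA), which also exploits the separability assumption by greedily selecting generating columns; it is not the row-entropy model proposed above.

```python
import numpy as np

def spa(X, r):
    """Successive Projection Algorithm: repeatedly pick the column with the
    largest residual norm, then project that direction out of the residual.
    Returns the indices of the r selected (anchor) columns."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax((R ** 2).sum(axis=0)))
        anchors.append(j)
        u = R[:, j] / (np.linalg.norm(R[:, j]) + 1e-12)
        R -= np.outer(u, u @ R)        # deflate the selected direction
    return anchors

# Toy separable data: 3 anchor columns plus convex mixtures of them.
rng = np.random.default_rng(0)
W = rng.random((6, 3))
H = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), size=7).T])
print(sorted(spa(W @ H, 3)))   # recovers the anchor indices [0, 1, 2]
```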
f829fa5686895ec831dd157f88949f79976664a7
Hierarchical Bayesian approaches play a central role in empirical marketing as they yield individual-level parameter estimates that can be used for targeting decisions. MCMC methods have been the methods of choice for estimating hierarchical Bayesian models as they are capable of providing accurate individual-level estimates. However, MCMC methods are computationally prohibitive and do not scale well when applied to the massive data sets that have become common in the current era of "Big Data". We introduce to the marketing literature a new class of Bayesian estimation techniques known as variational Bayesian (VB) inference. These methods tackle the scalability challenge via a deterministic optimization approach to approximate the posterior distribution and yield accurate estimates at a fraction of the computational cost associated with simulation-based MCMC methods. We exploit and extend recent developments in variational Bayesian inference and highlight how two VB estimation approaches, Mean-field VB (analogous to Gibbs sampling) for conjugate models and Fixed-form VB (analogous to Metropolis–Hastings) for nonconjugate models, can be effectively combined for estimating complex marketing models. We also show how recent advances in parallel computing and in stochastic optimization can be used to further enhance the speed of these VB methods. Using simulated as well as real data sets, we apply the VB approaches to several commonly used marketing models (e.g. mixed linear, logit, selection, and hierarchical ordinal logit models), and demonstrate how VB inference is widely applicable for marketing problems.
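To illustrate what mean-field VB's deterministic optimization looks like, here is a minimal CAVI sketch for the textbook univariate Gaussian model with unknown mean and precision (Bishop, PRML Sec. 10.1.3): the factorization q(mu)q(tau) is iterated to a fixed point instead of being sampled. This is a toy, not one of the marketing models above.

```python
import numpy as np

def cavi_gaussian(x, mu0=0.0, lam0=1.0, a0=1e-3, b0=1e-3, iters=50):
    """Mean-field updates: q(mu)=N(m, 1/lam), q(tau)=Gamma(a, b)."""
    N, xbar = len(x), np.mean(x)
    m = (lam0 * mu0 + N * xbar) / (lam0 + N)    # fixed across iterations
    a = a0 + (N + 1) / 2.0                      # fixed across iterations
    E_tau = a0 / b0
    for _ in range(iters):
        lam = (lam0 + N) * E_tau
        E_mu2 = m ** 2 + 1.0 / lam              # E_q[mu^2]
        b = b0 + 0.5 * (np.sum(x ** 2) - 2 * m * np.sum(x) + N * E_mu2
                        + lam0 * (E_mu2 - 2 * m * mu0 + mu0 ** 2))
        E_tau = a / b                           # refresh expectation
    return m, lam, a, b

x = np.random.default_rng(0).normal(2.0, 0.5, 500)
m, lam, a, b = cavi_gaussian(x)
print(m, a / b)   # approximate posterior mean and E[tau] (~2.0 and ~4.0)
```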
bf8a0014ac21ba452c38d27bc7d930c265c32c60
The application of high-level fusion approaches offers a sequence of significant advantages in multi-sensor data fusion, and automotive safety fusion systems are no exception. High-level fusion can be applied to automotive sensor networks with complementary and/or redundant fields of view. The advantage of this approach is that it ensures system modularity and allows benchmarking, as it does not permit feedback loops inside the processing. In this paper two specific high-level data fusion approaches are described, including a brief architectural and algorithmic presentation. These approaches differ mainly in their data association part: (a) the track-level fusion approach solves it with point-to-point association, with emphasis on object continuity and multidimensional assignment, and (b) the grid-based fusion approach proposes a generic way to model the environment and to perform sensor data fusion. The test case for these approaches is a multi-sensor-equipped PReVENT/ProFusion2 truck demonstrator vehicle.
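A minimal sketch of the point-to-point association step in approach (a), assuming tracks are summarized by 2-D positions and using the Hungarian algorithm with a simple distance gate; all settings are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(sensor_tracks, fused_tracks, gate=5.0):
    """Gated point-to-point association: returns matched (sensor, fused)
    index pairs minimizing total Euclidean distance."""
    cost = cdist(sensor_tracks, fused_tracks)       # distance matrix
    rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]

sensor = np.array([[0.0, 0.0], [10.0, 2.0], [50.0, 50.0]])
fused = np.array([[0.5, 0.1], [9.5, 2.2]])
print(associate(sensor, fused))   # [(0, 0), (1, 1)]; outlier stays unmatched
```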
c8cc94dd21d78f4f0d07ccb61153bfb798aeef2c
4152070bd6cd28cc44bc9e54ab3e641426382e75
The problem of classification has been widely studied in the data mining, machine learning, database, and information retrieval communities, with applications in a number of diverse domains, such as target marketing, medical diagnosis, news group filtering, and document organization. In this paper we provide a survey of a wide variety of text classification algorithms.
e050e89d01afffd5b854458fc48c9d6720a8072c
8bf72fb4edcb6974d3c4b0b2df63d9fd75c5dc4f
Sentiment Analysis is a widely studied field in both research and industry, and there are different approaches for addressing sentiment analysis related tasks. Sentiment Analysis engines implement approaches spanning from lexicon-based techniques, to machine learning, to syntactical rule analysis. Such systems are already evaluated in international research challenges. However, Semantic Sentiment Analysis approaches, which take into account or rely on large semantic knowledge bases and implement Semantic Web best practices, are not under specific experimental evaluation and comparison by other international challenges. Such approaches may potentially deliver higher performance, since they are also able to analyze the implicit semantic features associated with natural language concepts. In this paper, we present the fourth edition of the Semantic Sentiment Analysis Challenge, in which systems implementing or relying on semantic features are evaluated in a competition involving large test sets and different sentiment tasks. Systems merely based on syntax/word-count or purely lexicon-based approaches were excluded from the evaluation. We then present the results of the evaluation for each task and announce the winner of the most innovative approach award, which combines several knowledge bases for addressing the sentiment analysis task.
21da9ece5587df5a2ef79bf937ea19397abecfa0
This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs.
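As a toy illustration of recognition-as-optimization, the sketch below inverts a one-level static generative model by gradient descent on precision-weighted prediction errors (a single-level free-energy functional); the hierarchical dynamical case in the paper stacks such levels and adds temporal dynamics. All symbols and values here are our own simplification.

```python
def recognize(y, g, dg, eta=0.0, pi_y=1.0, pi_v=1.0, lr=0.05, steps=200):
    """Minimize F = pi_y/2*(y - g(v))^2 + pi_v/2*(v - eta)^2 over the
    cause v by gradient descent (recognition dynamics)."""
    v = eta
    for _ in range(steps):
        e_y = y - g(v)          # sensory prediction error
        e_v = v - eta           # prior prediction error
        v += lr * (pi_y * e_y * dg(v) - pi_v * e_v)   # descend dF/dv
    return v

# Example: data generated by g(v) = v**2 from a true cause near v = 1.5,
# with a Gaussian prior centered at eta = 1.0.
print(recognize(y=2.25, g=lambda v: v**2, dg=lambda v: 2*v, eta=1.0))
```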
38a935e212c8e10460545b74a7888e3966c03e74
This paper addresses the problem of amodal perception in 3D object detection. The task is to not only find object localizations in the 3D world, but also estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have attempted to harness the point cloud from the depth channel to exploit 3D features directly in 3D space and have demonstrated superiority over traditional 2.5D representation approaches. We revisit the amodal 3D detection problem by sticking to the 2.5D representation framework and directly relating 2.5D visual appearance to 3D objects. We propose a novel 3D object detection system that simultaneously predicts objects' 3D locations, physical sizes, and orientations in indoor scenes. Experiments on the NYUV2 dataset show our algorithm significantly outperforms the state-of-the-art and indicates that the 2.5D representation is capable of encoding features for 3D amodal object detection. All source code and data are available at https://github.com/phoenixnn/Amodal3Det.
4d7a8836b304a1ecebee19ff297f1850e81903b4
461ebcb7a274525b8efecf7990c85994248ab433
The Routing Protocol for Low-Power and Lossy Networks (RPL) is a novel routing protocol standardized for constrained environments such as 6LoWPAN networks. Providing security in IPv6/RPL connected 6LoWPANs is challenging because the devices are connected to the untrusted Internet and are resource constrained, the communication links are lossy, and the devices use a set of novel IoT technologies such as RPL, 6LoWPAN, and CoAP/CoAPs. In this paper we provide a comprehensive analysis of IoT technologies and their new security capabilities that can be exploited by attackers or IDSs. One of the major contributions in this paper is our implementation and demonstration of well-known routing attacks against 6LoWPAN networks running RPL as a routing protocol. We implement these attacks in the RPL implementation in the Contiki operating system and demonstrate these attacks in the Cooja simulator. Furthermore, we highlight novel security features in the IPv6 protocol and exemplify the use of these features for intrusion detection in the IoT by implementing a lightweight heartbeat protocol.
5b8869bb7afa5d8d3c183dfac0d0f26c2e218593
The cache hierarchy prevalent in today's high-performance processors has to be taken into account in order to design algorithms that perform well in practice. This paper advocates the adaptation of external memory algorithms to this purpose. This idea and the practical issues involved are exemplified by engineering a fast priority queue suited to external memory and cached memory that is based on k-way merging. It improves previous external memory algorithms by constant factors crucial for transferring it to cached memory. Running in the cache hierarchy of a workstation, the algorithm is at least two times faster than an optimized implementation of binary heaps and 4-ary heaps for large inputs.
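The k-way merging primitive the queue is built on can be illustrated with the standard library; a minimal sketch (the real data structure manages the sorted runs and buffers explicitly to control memory transfers):

```python
import heapq

# Merge k sorted runs with a small internal heap: each run is scanned
# sequentially, which is the cache- and I/O-friendly access pattern.
runs = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
merged = list(heapq.merge(*runs))    # k-way merge in O(n log k)
print(merged)                        # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```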
1f6ba0782862ec12a5ec6d7fb608523d55b0c6ba
We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.
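A minimal sketch of the described architecture: parallel convolutions over word embeddings with several filter widths, max-over-time pooling, dropout, and a softmax classifier. Hyperparameters are illustrative, and loading of pre-trained word vectors into the embedding layer is omitted.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab, emb=300, n_cls=2, widths=(3, 4, 5), maps=100):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)   # load pre-trained vectors here
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, maps, w) for w in widths])
        self.drop = nn.Dropout(0.5)
        self.fc = nn.Linear(maps * len(widths), n_cls)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)      # (batch, emb, seq_len)
        # One feature map per filter width, max-pooled over time.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(self.drop(torch.cat(pooled, dim=1)))

logits = TextCNN(vocab=10000)(torch.randint(0, 10000, (8, 40)))
print(logits.shape)   # torch.Size([8, 2])
```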