Query Text and Ranking 1 through Ranking 13 are string columns (cell lengths roughly 9 to 8.71k characters); score_0 through score_13 are float64 columns with values ranging from 0 to 1.25.

Query Text | Ranking 1 | Ranking 2 | Ranking 3 | Ranking 4 | Ranking 5 | Ranking 6 | Ranking 7 | Ranking 8 | Ranking 9 | Ranking 10 | Ranking 11 | Ranking 12 | Ranking 13 | score_0 | score_1 | score_2 | score_3 | score_4 | score_5 | score_6 | score_7 | score_8 | score_9 | score_10 | score_11 | score_12 | score_13 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Relating equivalence and reducibility to sparse sets For various polynomial-time reducibilities r, the authors ask whether being r-reducible to a sparse set is a broader notion than being r-equivalent to a sparse set. Although distinguishing equivalence and reducibility to sparse sets, for many-one or 1-truth-table reductions, would imply that P≠NP, the authors show that for k-truth-table reductions, k⩾2, equivalence and reducibility to sparse sets provably differ. Though R. Gavalda and D. Watanabe have shown that, for any polynomial-time computable unbounded function f(·), some sets f(n)-truth-table reducible to sparse sets are not even Turing equivalent to sparse sets, the authors show that extending their result to the 2-truth-table case would provide a proof that P≠NP. Additionally, the authors study the relative power of different notions of reducibility and show that disjunctive and conjunctive truth-table reductions to sparse sets are surprisingly powerful, refuting a conjecture of K. Ko (1989). | On sets with efficient implicit membership tests This paper completely characterizes the complexity of implicit membership testing in terms of the well-known complexity class OptP, optimization polynomial time, and concludes that many complex sets have polynomial-time implicit membership tests. | Some connections between bounded query classes and non-uniform complexity It is shown that if there is a polynomial-time algorithm that tests k(n)=O(log n) points for membership in a set A by making only k(n)-1 adaptive queries to an oracle set X, then A belongs to NP/poly intersection co-NP/poly (if k(n)=O(1) then A belongs to P/poly). In particular, k(n)=O(log n) queries to an NP-complete set (k(n)=O(1) queries to an NP-hard set) are more powerful than k(n)-1 queries, unless the polynomial hierarchy collapses. Similarly, if there is a small circuit that tests k(n) points for membership in A by making only k(n)-1 adaptive queries to a set X, then there is a correspondingly small circuit that decides membership in A without an oracle. An investigation is conducted of the quantitatively stronger assumption that there is a polynomial-time algorithm that tests 2^k strings for membership in A by making only k queries to an oracle X, and qualitatively stronger conclusions about the structure of A are derived: A cannot be self-reducible unless A∈P, and A cannot be NP-hard unless P=NP. Similar results hold for counting classes. In addition, relationships between bounded-query computations, lowness, and the p-degrees are investigated. | On truth-table reducibility to SAT and the difference hierarchy over NP We show that polynomial time truth-table reducibility via Boolean circuits to SAT is the same as log space truth-table reducibility via Boolean formulas to SAT and the same as log space Turing reducibility to SAT. In addition, we prove that a constant number of rounds of parallel queries to SAT is equivalent to one round of parallel queries. Finally, we show that the infinite difference hierarchy over NP is equal to Δ_2 and give an oracle separating Δ_2 from the class of predicates polynomial time truth-table reducible to SAT. | More complicated questions about maxima and minima, and some closures of NP | Logic programs with classical negation | On implementing MPI-IO portably and with high performance We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance.
One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, lseek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, non-contiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, and file preallocation. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance. | Creating optimal cloud storage systems Effortless data storage "in the cloud" is gaining popularity for personal, enterprise and institutional data backups and synchronisation as well as for highly scalable access from software applications running on attached compute servers. The data is usually access-protected, encrypted and replicated depending on the security and scalability needs. Despite the advances in technology, the practical usefulness and longevity of cloud storage is limited in today's systems, which severely impacts the acceptance and adoption rates. Therefore, we introduce a novel cloud storage management system which optimally combines storage resources from multiple providers so that redundancy, security and other non-functional properties can be adjusted adequately to the needs of the storage service consumer. The system covers the entire storage service lifecycle from the consumer perspective. Hence, a definition of optimality is first contributed which is bound to both the architecture and the lifecycle phases. Next, an ontology for cloud storage services is presented as a prerequisite for optimality. Furthermore, we present NubiSave, a user-friendly storage controller implementation with adaptable overhead which runs on and integrates into typical consumer environments as a central part of an overall storage system. Its optimality claims are validated in real-world scenarios with several commercial online and cloud storage providers. | Nonmonotonic reasoning in the framework of situation calculus Most of the solutions proposed to the Yale shooting problem have either introduced new nonmonotonic reasoning methods (generally involving temporal priorities) or completely reformulated the domain axioms to represent causality explicitly. This paper presents a new solution based on the idea that since the abnormality predicate takes a situational argument, it is important for the meanings of the situations to be held constant across the various models being compared. This is accomplished by a...
| Compilability of Domain Descriptions in the Language A | Scheduling a mixed interactive and batch workload on a parallel, shared memory supercomputer | A cost-benefit scheme for high performance predictive prefetching | When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.24 | 0.16 | 0.034286 | 0.000303 | 0.000088 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Providing user support for interactive applications with FUSE FUSE (Formal User Interface Specification Environment) is an integrated user interface development environment that offers tool-based support for all phases of the interface design process. PLUG-IN forms one part of FUSE. Its purpose is to provide support for the end-user working with user interfaces generated by FUSE. PLUG-IN produces dynamic on-line help pages and animation sequences on the fly. On the dynamic help pages textual help for the user is displayed whereas the animation sequences are used to show how the user can interact with the application. In the presentation the architecture of FUSE is discussed. Furthermore PLUG-IN’s user guidance capabilities are demonstrated by looking at the user interface of an interactive ISDN telephone simulation. | The FUSE-System: an Integrated User Interface Design Environment With the FUSE (Formal User Interface Specification Environment)-System we present a methodology and a set of integrated tools for the automatic generation of graphical user interfaces. FUSE provides tool-based support for all phases (task-, user-, problem domain analysis, design of the logical user interface, design of user interface in a particular layout style) of the user interface development process. Based on a formal specification of dialogue- and layout guidelines, FUSE allows the automatic generation of user interfaces out of specifications of the task-, problem domain- and user-model. Moreover, the FUSE-System incorporates a component for the automatic generation of powerful help- and user guidance components. In this paper, we describe the FUSE-methodology by modelling user interfaces of an ISDN phone simulation. Furthermore, the two major components of FUSE (BOSS, PLUG-IN) are presented: The BOSS-System supports the design of the logical user interface and the formal specification of layout guidelines. PLUG-IN generates task-based help- and user guidance components. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Logic programs with classical negation | The well-founded semantics for general logic programs A general logic program (abbreviated to “program” hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the “meaning of the program,” or its “declarative semantics.” Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a “satisfactory” total model; for such programs, the question of an appropriate partial model arises.
Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of “stratified” and “locally stratified” programs. The method in this paper is also compared with other proposals in the literature, including Clark’s “program completion,” Fitting’s and Kunen’s 3-valued interpretations of it, and the “stable models” of Gelfond and Lifschitz. | Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems. | Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds. | Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
| Improving the I/O Performance of Real-Time Database Systems with Multiple-Disk Storage Structures | Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation. | A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains).
Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
De-Health: All Your Online Health Information Are Belong to Us. In this paper, we study the privacy of online health data. We present a novel online health data De-Anonymization (DA) framework, named De-Health. De-Health consists of two phases: Top-K DA, which identifies a candidate set for each anonymized user, and refined DA, which de-anonymizes an anonymized user to a user in its candidate set. By employing both candidate selection and DA verification schemes, De-Health significantly reduces the DA space by several orders of magnitude while achieving promising DA accuracy. Leveraging two real world online health datasets WebMD (89,393 users, 506K posts) and HealthBoards (388,398 users, 4.7M posts), we validate the efficacy of De-Health. Further, when the training data are insufficient, De-Health can still successfully de-anonymize a large portion of anonymized users. We develop the first analytical framework on the soundness and effectiveness of online health data DA. By analyzing the impact of various data features on the anonymity, we derive the conditions and probabilities for successfully de-anonymizing one user or a group of users in exact DA and Top-K DA. Our analysis is meaningful to both researchers and policy makers in facilitating the development of more effective anonymization techniques and proper privacy policies. We present a linkage attack framework which can link online health/medical information to real world people. Through a proof-of-concept attack, we link 347 out of 2805 WebMD users to real world people, and find the full names, medical/health information, birthdates, phone numbers, and other sensitive information for most of the re-identified users. This clearly illustrates the fragility of the notion of privacy of those who use online health forums. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way.
We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance.
In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). 
Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few studies investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Transaction support in read optimized and write optimized file systems This paper provides a comparative analysis of five implementations of transaction support. The first of the methods is the traditional approach of implementing transaction processing within a data manager on top of a read optimized file system. The second also assumes a traditional file system but embeds transaction support inside the file system. The third model considers a traditional data manager on top of a write optimized file system. The last two models both embed transaction support inside a write optimized file system, each using a different logging mechanism. Our results show that in a transaction processing environment, a write optimized file system often yields better performance than one optimized for reads. In addition, we show that file system embedded transaction managers can perform as well as data managers when transaction throughput is limited by I/O bandwidth. Finally, even when the CPU is the critical resource, the difference in performance between a data manager and an embedded system is much smaller than previous work has shown. | The DASDBS Project: Objectives, Experiences, and Future Prospects A retrospective of the Darmstadt database system project, also known as DASDBS, is presented. The project is aimed at providing data management support for advanced applications, such as geo-scientific information systems and office automation. Similar to the dichotomy of RSS and RDS in System R, a layered architectural approach was pursued: a storage management kernel serves as the lowest common denominator of the requirements of the various applications classes, and a family of application-oriented front-ends provides semantically richer functions on top of the kernel. The lessons that were learned from building the DASDBS system are discussed. Particular emphasis is placed on the following issues: the role of nested relations, the experiences with using object buffers for coupling the system with the programming-language environment and the learning process in implementing multilevel transactions. | Incremental recovery in main memory database systems Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM. | Microprocessor technology trends The rapid pace of advancement of microprocessor technology has shown no sign of diminishing, and this pace is expected to continue in the future.
Recent trends in such areas as silicon technology, processor architecture and implementation, system organization, buses, higher levels of integration, self-testing, caches, coprocessors, and fault tolerance are discussed, and expectations for further ad... | Read Optimized File System Designs: A Performance Evaluation This paper presents a performance comparison of several file system allocation policies. The file systems are designed to provide high bandwidth between disks and main memory by taking advantage of parallelism in an underlying disk array, catering to large units of transfer, and minimizing the bandwidth dedicated to the transfer of meta data. All of the file systems described use a multiblock allocation strategy which allows both large and small files to be allocated efficiently. Simulation results show that these multiblock policies result in systems that are able to utilize a large percentage of the underlying disk bandwidth; more than 90% in sequential cases. As general purpose systems are called upon to support more data intensive applications such as databases and supercomputing, these policies offer an opportunity to provide superior performance to a larger class of users. | Parallelism in relational data base systems: architectural issues and design approaches With current systems, some important complex queries may take days to complete because of: (1) the volume of data to be processed, (2) limited aggregate resources. Introducing parallelism addresses the first problem. Cheaper, but powerful computing resources solve the second problem. According to a survey by Brodie,1 only 10% of computerized data is in data bases. This is an argument for both more variety and volume of data to be moved into data base systems. We conjecture that the primary reasons for this low percentage are that data base management systems (DBMSs) still need to provide far greater functionality and improved performance compared to a combination of application programs and file systems. This paper addresses the issues and solutions relating to intraquery parallelism in a relational DBMS supporting SQL. Instead of focussing only on a few algorithms for a subset of the problems, we provide a broad framework for the study of the numerous issues that need to be addressed in supporting parallelism efficiently and flexibly. We also discuss the impact that parallelization of complex queries has on short transactions which have stringent response time constraints. The pros and cons of the shared nothing, shared disks and shared everything architectures for parallelism are enumerated. The impact of parallelism on a number of components of an industrial-strength DBMS are pointed out. The different stages of query processing during which parallelism may be gainfully employed are identified. The interactions between parallelism and the traditional systems' pipelining technique are analyzed. Finally, the performance implications of parallelizing a specific complex query are studied. This gives us a range of sample points for different parameters of a parallel system architecture, namely, I/O and communication bandwidth as a function of aggregate MIPS. | Declustering using error correcting codes The problem examined is to distribute a binary Cartesian product file on multiple disks to maximize the parallelism for partial match queries. Cartesian product files appear as a result of some secondary key access methods, such as the multiattribute hashing [10], the grid file [6] etc.
For the binary case, the problem is reduced into grouping the 2^n binary strings on n bits in m groups of unsimilar strings. The main idea proposed in this paper is to group the strings such that the group forms an Error Correcting Code (ECC). This construction guarantees that the strings of a given group will have large Hamming distances, i.e., they will differ in many bit positions. Intuitively, this should result into good declustering. We briefly mention previous heuristics for declustering, we describe how exactly to build a declustering scheme using an ECC, and we prove a theorem that gives a necessary condition for our method to be optimal. Analytical results show that our method is superior to older heuristics, and that it is very close to the theoretical (non-tight) bound. | The Multics Input/Output system An I/O system has been implemented in the Multics system that facilitates dynamic switching of I/O devices. This switching is accomplished by providing a general interface for all I/O devices that allows all equivalent operations on different devices to be expressed in the same way. Also particular devices are referenced by symbolic names and the binding of names to devices can be dynamically modified. Available I/O operations range from a set of basic I/O calls that require almost no knowledge of the I/O System or the I/O device being used to fully general calls that permit one to take full advantage of all features of an I/O device but require considerable knowledge of the I/O System and the device. The I/O System is described and some popular applications of it, illustrating these features, are presented. | A Dynamic Approach for Efficient TCP Buffer Allocation The paper proposes local and global optimization schemes for efficient TCP buffer allocation in an HTTP server. The proposed local optimization scheme dynamically adjusts the TCP send-buffer size to the connection and server characteristics. The global optimization scheme divides a certain amount of buffer space among all active TCP connections. These schemes are of increasing importance due to the large scale of TCP connection characteristics. The schemes are compared to the static allocation policy employed by a typical HTTP server, and shown to achieve considerable improvement to server performance and better utilization of its resources. The schemes require only minor code changes and only at the server. Keywords: HTTP, server performance, TCP send-buffer. An early version of this paper was presented in IC3N'98, The 7th International Conference on Computer | The TickerTAIP parallel RAID architecture Traditional disk arrays have a centralized architecture, with a single controller through which all requests flow. Such a controller is a single point of failure, and its performance limits the maximum size that the array can grow to. We describe here TickerTAIP, a parallel architecture for disk arrays that distributes the controller functions across several loosely-coupled processors. The result is better scalability, fault tolerance, and flexibility.
This paper presents the TickerTAIP architecture and an evaluation of its behavior. We demonstrate the feasibility by an existence proof; describe a family of distributed algorithms for calculating RAID parity; discuss techniques for establishing request atomicity, sequencing and recovery; and evaluate the performance of the TickerTAIP design in both absolute terms and by comparison to a centralized RAID implementation. We conclude that the TickerTAIP architectural approach is feasible, useful, and effective. | Power-aware storage cache management Reducing energy consumption is an important issue for data centers. Among the various components of a data center, storage is one of the biggest energy consumers. Previous studies have shown that the average idle period for a server disk in a data center is very small compared to the time taken to spin down and spin up. This significantly limits the effectiveness of disk power management schemes. This article proposes several power-aware storage cache management algorithms that provide more opportunities for the underlying disk power management schemes to save energy. More specifically, we present an offline energy-optimal cache replacement algorithm using dynamic programming, which minimizes the disk energy consumption. We also present an offline power-aware greedy algorithm that is more energy-efficient than Belady's offline algorithm (which minimizes cache misses only). We also propose two online power-aware algorithms, PA-LRU and PB-LRU. Simulation results with both a real system and synthetic workloads show that, compared to LRU, our online algorithms can save up to 22 percent more disk energy and provide up to 64 percent better average response time. We have also investigated the effects of four storage cache write policies on disk energy consumption. | Restricted Boltzmann machines for collaborative filtering Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user/movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6% better than the score of Netflix's own system. | SemEval-2012 task 1: English Lexical Simplification We describe the English Lexical Simplification task at SemEval-2012. This is the first time such a shared task has been organized and its goal is to provide a framework for the evaluation of systems for lexical simplification and foster research on context-aware lexical simplification approaches. The task requires that annotators and systems rank a number of alternative substitutes -- all deemed adequate -- for a target word in context, according to how "simple" these substitutes are. The notion of simplicity is biased towards non-native speakers of English. Out of nine participating systems, the best scoring ones combine context-dependent and context-independent information, with the strongest individual contribution given by the frequency of the substitute regardless of its context. 
| Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.0589 | 0.041305 | 0.040049 | 0.040049 | 0.026726 | 0.015603 | 0.008529 | 0.00253 | 0.000046 | 0.000015 | 0.000003 | 0 | 0 | 0 |
Towards unsupervised physical activity recognition using smartphone accelerometers The development of smartphones equipped with accelerometers gives a promising way for researchers to accurately recognize an individual's physical activity in order to better understand the relationship between physical activity and health. However, a huge challenge for such sensor-based activity recognition task is the collection of annotated or labelled training data. In this work, we employ an unsupervised method for recognizing physical activities using smartphone accelerometers. Features are extracted from the raw acceleration data collected by smartphones, then an unsupervised classification method called MCODE is used for activity recognition. We evaluate the effectiveness of our method on three real-world datasets, i.e., a public dataset of daily living activities and two datasets of sports activities of race walking and basketball playing collected by ourselves, and we find our method outperforms other existing methods. The results show that our method is viable to recognize physical activities using smartphone accelerometers. | On-line deep learning method for action recognition. In this paper an unsupervised on-line deep learning algorithm for action recognition in video sequences is proposed. Deep learning models capable of deriving spatio-temporal data have been proposed in the past with remarkable results, yet, they are mostly restricted to building features from a short window length. The model presented here, on the other hand, considers the entire sample sequence and extracts the description in a frame-by-frame manner. Each computational node of the proposed paradigm forms clusters and computes point representatives, respectively. Subsequently, a first-order transition matrix stores and continuously updates the successive transitions among the clusters. Both the spatial and temporal information are concurrently treated by the Viterbi Algorithm, which maximizes a criterion based upon (a) the temporal transitions and (b) the similarity of the respective input sequence with the cluster representatives. The derived Viterbi path is the node’s output, whereas the concatenation of nine vicinal such paths constitute the input to the corresponding upper level node. The engagement of ART and the Viterbi Algorithm in a Deep learning architecture, here, for the first time, leads to a substantially different approach for action recognition. Compared with other deep learning methodologies, in most cases, it is shown to outperform them, in terms of classification accuracy. | A Framework For Selecting Deep Learning Hyper-Parameters Recent research has found that deep learning architectures show significant improvements over traditional shallow algorithms when mining high dimensional datasets. When the choice of algorithm employed, hyper-parameter setting, number of hidden layers and nodes within a layer are combined, the identification of an optimal configuration can be a lengthy process. Our work provides a framework for building deep learning architectures via a stepwise approach, together with an evaluation methodology to quickly identify poorly performing architectural configurations. Using a dataset with high dimensionality, we illustrate how different architectures perform and how one algorithm configuration can provide input for fine-tuning more complex models. | A Novel Feature Extraction Method for Scene Recognition Based on Centered Convolutional Restricted Boltzmann Machines. 
Scene recognition is an important research topic in computer vision, while feature extraction is a key step of scene recognition. Although classical Restricted Boltzmann Machines (RBM) can efficiently represent complicated data, it is hard to handle large images due to its complexity in computation. In this paper, a novel feature extraction method, named Centered Convolutional Restricted Boltzmann Machines (CCRBM), is proposed for scene recognition. The proposed model improves the Convolutional Restricted Boltzmann Machines (CRBM) by introducing centered factors in its learning strategy to reduce the source of instabilities. First, the visible units of the network are redefined using centered factors. Then, the hidden units are learned with a modified energy function by utilizing a distribution function, and the visible units are reconstructed using the learned hidden units. In order to achieve better generative ability, the Centered Convolutional Deep Belief Networks (CCDBN) is trained in a greedy layer-wise way. Finally, a softmax regression is incorporated for scene recognition. Extensive experimental evaluations on the datasets of natural scenes, MIT-indoor scenes, MIT-Places 205, SUN 397, Caltech 101, CIFAR-10, and NORB show that the proposed approach performs better than its counterparts in terms of stability, generalization, and discrimination. The CCDBN model is more suitable for natural scene image recognition by virtue of convolutional property. | High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. High-dimensional problem domains pose significant challenges for anomaly detection. The presence of irrelevant features can conceal the presence of anomalies. This problem, known as the ‘curse of dimensionality’, is an obstacle for many anomaly detection techniques. Building a robust anomaly detection model for use in high-dimensional spaces requires the combination of an unsupervised feature extractor and an anomaly detector. While one-class support vector machines are effective at producing decision surfaces from well-behaved feature vectors, they can be inefficient at modelling the variation in large, high-dimensional datasets. Architectures such as deep belief networks (DBNs) are a promising technique for learning robust features. We present a hybrid model where an unsupervised DBN is trained to extract generic underlying features, and a one-class SVM is trained from the features learned by the DBN. Since a linear kernel can be substituted for nonlinear ones in our hybrid model without loss of accuracy, our model is scalable and computationally efficient. The experimental results show that our proposed model yields comparable anomaly detection performance with a deep autoencoder, while reducing its training and testing time by a factor of 3 and 1000, respectively. | A restricted Boltzmann machine based two-lead electrocardiography classification A restricted Boltzmann machine learning algorithm is proposed for the two-lead heart beat classification problem. ECG classification is a complex pattern recognition problem. The unsupervised learning algorithm of restricted Boltzmann machine is ideal in mining the massive unlabelled ECG wave beats collected in the heart healthcare monitoring applications. A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
In this paper a deep belief network was constructed and the RBM based algorithm was used in the classification problem. Using the twelve classes recommended by the ANSI/AAMI EC57: 1998/(R)2008 standard as the waveform labels, the algorithm was evaluated on the two-lead ECG dataset of MIT-BIH and achieves an accuracy of 98.829%. The proposed algorithm performed well on the two-lead ECG classification problem and could be generalized to multi-lead unsupervised ECG classification or detection problems. | Learning methods for generic object recognition with invariance to pose and lighting We assess the applicability of several popular learning methods for the problem of recognizing generic visual categories with invariance to pose, lighting, and surrounding clutter. A large dataset comprising stereo image pairs of 50 uniform-colored toys under 36 azimuths, 9 elevations, and 6 lighting conditions was collected (for a total of 194,400 individual images). The objects were 10 instances of 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. Five instances of each category were used for training, and the other five for testing. Low-resolution grayscale images of the objects with various amounts of variability and surrounding clutter were used for training and testing. Nearest Neighbor methods, Support Vector Machines, and Convolutional Networks, operating on raw pixels or on PCA-derived features were tested. Test error rates for unseen object instances placed on uniform backgrounds were around 13% for SVM and 7% for Convolutional Nets. On a segmentation/recognition task with highly cluttered images, SVM proved impractical, while Convolutional Nets yielded 16.7% error. A real-time version of the system was implemented that can detect and classify objects in natural scenes at around 10 frames per second. | Evaluating collaborative filtering recommender systems Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated. | Mach: A New Kernel Foundation for UNIX Development Mach is a multiprocessor operating system kernel and environment under development at Carnegie Mellon University. Mach provides a new foundation for UNIX development that spans networks of uniprocessors and multiprocessors. This paper describes Mach and the motivations that led to its design. Also described are some of the details of its implementation and current status. | Two components of an action language Some of the recent work on representing action makes use of high-level action languages.
In this paper we show that an action language can be represented as the sum of two distinct parts: an “action description language” and an “action query language.” A set of propositions in an action description language describes the effects of actions on states. Mathematically, it defines a transition system of the kind familiar from the theory of finite automata. An action query language serves for expressing properties of paths in a given transition system. We define the general concepts of a transition system, of an action description language and of an action query language, give a series of examples of languages of both kinds, and show how to combine a description language and a query language into one. This construction makes it possible to design the two components of an action language independently, which leads to the simplification and clarification of the theory of actions. | Caching Hints in Distributed Systems Caching reduces the average cost of retrieving data by amortizing the lookup cost over several references to the data. Problems with maintaining strong cache consistency in a distributed system can be avoided by treating cached information as hints. A new approach to managing caches of hints suggests maintaining a minimum level of cache accuracy, rather than maximizing the cache hit ratio, in order to guarantee performance improvements. The desired accuracy is based on the ratio of lookup costs to the costs of detecting and recovering from invalid cache entries. Cache entries are aged so that they get purged when their estimated accuracy falls below the desired level. The age thresholds are dictated solely by clients' accuracy requirements instead of being suggested by data storage servers or system administrators. | Computational Politics: Electoral Systems This paper discusses three computation-related results in the study of electoral systems: 1. Determining the winner in Lewis Carroll's 1876 electoral system is complete for parallel access to NP [22]. 2. For any electoral system that is neutral, consistent, and Condorcet, determining the winner is complete for parallel access to NP [21]. 3. For each census in US history, a simulated annealing algorithm yields provably fairer (in a mathematically rigorous sense) congressional apportionments than any of the classic algorithms--even the algorithm currently used in the United States [24]. | A Genetic Approach to Planning in Heterogeneous Computing Environments Planning is an artificial intelligence problem with a wide range of real-world applications. Genetic algorithms, neural networks, and simulated annealing are heuristic search methods often used to solve complex optimization problems. In this paper, we propose a genetic approach to planning in the context of workflow management and process coordination on a heterogeneous grid. We report results for two planning problems, the Towers of Hanoi and the Sliding-tile puzzle. | Privacy-preserving restricted boltzmann machine. With the arrival of the big data era, it is predicted that distributed data mining will lead to an information technology revolution. To motivate different institutes to collaborate with each other, the crucial issue is to eliminate their concerns regarding data privacy. In this paper, we propose a privacy-preserving method for training a restricted boltzmann machine (RBM). The RBM can be obtained without the institutes revealing their private data to each other when using our privacy-preserving method. We provide a correctness and efficiency analysis of our algorithms.
The comparative experiment shows that the accuracy is very close to that of the original RBM model. | 1.1 | 0.1 | 0.1 | 0.05 | 0.033333 | 0.02 | 0.001818 | 0 | 0 | 0 | 0 | 0 | 0 | 0
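The query abstract in the row above describes extracting features from raw smartphone acceleration data before applying an unsupervised classifier. As a minimal illustration of that preprocessing step (not the MCODE method the abstract refers to), the sketch below windows a 3-axis accelerometer stream and computes simple time-domain features; the sampling rate, window length, overlap, and feature set are assumptions made only for this example.

```python
import numpy as np

def window_features(acc, win=128, step=64):
    """Slide a window over a 3-axis accelerometer stream and compute
    simple time-domain features: per-axis mean and standard deviation,
    plus mean and standard deviation of the acceleration magnitude.

    acc: array of shape (n_samples, 3); returns (n_windows, 8).
    """
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)        # per-sample magnitude
        feats.append(np.concatenate([
            w.mean(axis=0),                    # mean_x, mean_y, mean_z
            w.std(axis=0),                     # std_x, std_y, std_z
            [mag.mean(), mag.std()],           # magnitude statistics
        ]))
    return np.asarray(feats)

# Toy usage: 10 seconds of synthetic data at an assumed 50 Hz sampling rate.
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 1.0, size=(500, 3))
print(window_features(acc).shape)              # (n_windows, 8)
```

The resulting feature matrix would then be fed to whatever unsupervised grouping method is chosen; the specific clustering step used in the paper above is not reproduced here.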
Comparison of knowledge sharing strategies in a parallel QBF solver. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
| Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. 
Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
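Several abstracts in the row above (the query on a parallel QBF solver and the paper on evaluating quantified Boolean formulae) concern deciding prenex QBF instances. The sketch below is only a brute-force semantic evaluator that makes the quantifier-prefix-plus-CNF-matrix representation concrete; it is not the Davis-Putnam-style procedure with pruning described in the abstract, nor the knowledge-sharing parallel solver of the query, and the DIMACS-style clause encoding is an assumption made for illustration.

```python
def eval_matrix(clauses, assign):
    """Evaluate a CNF matrix under a complete assignment {var: bool}.
    Literal 3 means variable 3 is true, -3 means variable 3 is false.
    """
    return all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def solve_qbf(prefix, clauses, assign=None):
    """Brute-force evaluation of a prenex QBF, outermost quantifier first.
    prefix is a list of ('a', var) / ('e', var) pairs for forall / exists.
    """
    if assign is None:
        assign = {}
    if not prefix:
        return eval_matrix(clauses, assign)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (solve_qbf(rest, clauses, {**assign, var: value})
                for value in (False, True))
    return all(branches) if q == 'a' else any(branches)

# forall x exists y . (x or y) and (not x or not y)  -- true, take y = not x
print(solve_qbf([('a', 1), ('e', 2)], [[1, 2], [-1, -2]]))  # True
```

This exponential recursion is exactly what the pruning techniques in the abstract above are designed to avoid, particularly at universal quantifiers.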
A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and two-dimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of high-dimensional image data and show that these codes are substantially more robust compared against other image codes such as ICA and wavelets. | Rational Kernels: Theory and Algorithms Many classification algorithms were originally designed for fixed-size vectors. Recent applications in text and speech processing and computational biology require however the analysis of variable-length sequences and more generally weighted automata. An approach widely used in statistical learning techniques such as Support Vector Machines (SVMs) is that of kernel methods, due to their computational efficiency in high-dimensional feature spaces. We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels, that extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. Not all rational kernels are positive definite and symmetric (PDS), or equivalently verify the Mercer condition, a condition that guarantees the convergence of training for discriminant classification algorithms such as SVMs. We present several theoretical results related to PDS rational kernels. We show that under some general conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We give the proof of several characterization results that can be used to guide the design of PDS rational kernels. We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before. Rational kernels can be combined with SVMs to form efficient and powerful techniques for a variety of classification tasks in text and speech processing, or computational biology.
We describe examples of general families of PDS rational kernels that are useful in many of these applications and report the result of our experiments illustrating the use of rational kernels in several difficult large-vocabulary spoken-dialog classification tasks based on deployed spoken-dialog systems. Our results show that rational kernels are easy to design and implement and lead to substantial improvements of the classification accuracy. | An Information Measure For Classification | Self Supervised Boosting Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data. | A two-layer ICA-like model estimated by score matching Capturing regularities in high-dimensional data is an important problem in machine learning and signal processing. Here we present a statistical model that learns a nonlinear representation from the data that reflects abstract, invariant properties of the signal without making requirements about the kind of signal that can be processed. The model has a hierarchy of two layers, with the first layer broadly corresponding to Independent Component Analysis (ICA) and a second layer to represent higher order structure. We estimate the model using the mathematical framework of Score Matching (SM), a novel method for the estimation of non-normalized statistical models. The model incorporates a squaring nonlinearity, which we propose to be suitable for forming a higher-order code of invariances. Additionally the squaring can be viewed as modelling subspaces to capture residual dependencies, which linear models cannot capture. | Regularization and Semi-Supervised Learning on Large Graphs We consider the problem of labeling a partially labeled graph. This setting may arise in a number of situations from survey sampling to information retrieval to pattern recognition in manifold settings. It is also of potential practical importance, when the data is abundant, but labeling is expensive or requires human assistance. Our approach develops a framework for regularization on such graphs. The algorithms are very simple and involve solving a single, usually sparse, system of linear equations. Using the notion of algorithmic stability, we derive bounds on the generalization error and relate it to structural invariants of the graph. Some experimental results testing the performance of the regularization algorithm and the usefulness of the generalization bound are presented. | Energy-based models for sparse overcomplete representations We present a new way of extending independent components analysis (ICA) to overcomplete representations.
In contrast to the causal generative extensions of ICA which maintain marginal independence of sources, we define features as deterministic (linear) functions of the inputs. This assumption results in marginal dependencies among the features, but conditional independence of the features given the inputs. By assigning energies to the features a probability distribution over the input states is defined through the Boltzmann distribution. Free parameters of this model are trained using the contrastive divergence objective (Hinton, 2002). When the number of features is equal to the number of input dimensions this energy-based model reduces to noiseless ICA and we show experimentally that the proposed learning algorithm is able to perform blind source separation on speech data. In additional experiments we train overcomplete energy-based models to extract features from various standard data-sets containing speech, natural images, hand-written digits and faces. | Unsupervised Learning of Image Transformations We describe a probabilistic model for learning rich, distributed representations of image transformations. The basic model is defined as a gated conditional random field that is trained to predict transformations of its inputs using a factorial set of latent variables. Inference in the model consists in extracting the transformation, given a pair of images, and can be performed exactly and efficiently. We show that, when trained on natural videos, the model develops domain specific motion features, in the form of fields of locally transformed edge filters. When trained on affine, or more general, transformations of still images, the model develops codes for these transformations, and can subsequently perform recognition tasks that are invariant under these transformations. It can also fantasize new transformations on previously unseen images. We describe several variations of the basic model and provide experimental results that demonstrate its applicability to a variety of tasks. | Training restricted Boltzmann machines using approximations to the likelihood gradient A new algorithm for training Restricted Boltzmann Machines is introduced. The algorithm, named Persistent Contrastive Divergence, is different from the standard Contrastive Divergence algorithms in that it aims to draw samples from almost exactly the model distribution. It is compared to some standard Contrastive Divergence and Pseudo-Likelihood algorithms on the tasks of modeling and classifying various types of data. The Persistent Contrastive Divergence algorithm outperforms the other algorithms, and is equally fast and simple. | Kernel Methods for Deep Learning. We introduce a new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets. These kernel functions can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that we call multilayer kernel machines (MKMs). We evaluate SVMs and MKMs with these kernel functions on problems designed to illustrate the advantages of deep architectures. On several problems, we obtain better results than previous, leading benchmarks from both SVMs with Gaussian kernels as well as deep belief nets. | A multi-task learning formulation for predicting disease progression Alzheimer's Disease (AD), the most common type of dementia, is a severe neurodegenerative disorder. Identifying markers that can track the progress of the disease has recently received increasing attentions in AD research.
A definitive diagnosis of AD requires autopsy confirmation, thus many clinical/cognitive measures including Mini Mental State Examination (MMSE) and Alzheimer's Disease Assessment Scale cognitive subscale (ADAS-Cog) have been designed to evaluate the cognitive status of the patients and used as important criteria for clinical diagnosis of probable AD. In this paper, we propose a multi-task learning formulation for predicting the disease progression measured by the cognitive scores and selecting markers predictive of the progression. Specifically, we formulate the prediction problem as a multi-task regression problem by considering the prediction at each time point as a task. We capture the intrinsic relatedness among different tasks by a temporal group Lasso regularizer. The regularizer consists of two components including an L2,1-norm penalty on the regression weight vectors, which ensures that a small subset of features will be selected for the regression models at all time points, and a temporal smoothness term which ensures a small deviation between two regression models at successive time points. We have performed extensive evaluations using various types of data at the baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database for predicting the future MMSE and ADAS-Cog scores. Our experimental studies demonstrate the effectiveness of the proposed algorithm for capturing the progression trend and the cross-sectional group differences of AD severity. Results also show that most markers selected by the proposed algorithm are consistent with findings from existing cross-sectional studies. | Actions and specificity A solution to the problem of specificity in a resource-oriented deductive approach to actions and change is presented. Specificity originates in the problem of overloading methods in object oriented frameworks but can be observed in general applications of actions and change in logic. We give a uniform solution to the problem of specificity culminating in a completed equational logic program with an equational theory. We show the soundness and completeness of SLDENF-resolution, i.e. SLD-resolution augmented by negation-as-failure and by an equational theory, wrt the completed program. Finally, the expressiveness of our approach for performing general reasoning about actions, change, and causality is demonstrated. | On efficient computation of variable MUSes In this paper we address the following problem: given an unsatisfiable CNF formula ${\mathcal{F}}$, find a minimal subset of variables of ${\mathcal{F}}$ that constitutes the set of variables in some unsatisfiable core of ${\mathcal{F}}$. This problem, known as the variable MUS (VMUS) computation problem, captures the need to reduce the number of variables that appear in unsatisfiable cores. Previous work on computation of VMUSes proposed a number of algorithms for solving the problem. However, the proposed algorithms lack all of the important optimization techniques that have been recently developed in the context of (clausal) MUS computation. We show that these optimization techniques can be adopted for the VMUS computation problem and result in multiple orders of magnitude speed-ups on industrial application benchmarks. In addition, we demonstrate that in practice VMUSes can often be computed faster than MUSes, even when state-of-the-art optimizations are used in both contexts.
| Super-Solutions: Succinctly Representing Solutions in Abductive Annotated Probabilistic Temporal Logic Annotated Probabilistic Temporal (APT) logic programs are a form of logic programs that allow users to state (or systems to automatically learn) rules of the form “formula G becomes true Δt time units after formula F became true with ℓ to u% probability.” In this article, we deal with abductive reasoning in APT logic: given an APT logic program Π, a set of formulas H that can be “added” to Π, and a (temporal) goal g, is there a subset S of H such that Π ∪ S is consistent and entails the goal g? In general, there are many different solutions to the problem and some of them can be highly repetitive, differing only in some unimportant temporal aspects. We propose a compact representation called super-solutions that succinctly represent sets of such solutions. Super-solutions are compact, but lossless representations of sets of such solutions. We study the complexity of existence of basic, super-, and maximal super-solutions as well as check if a set is a solution/super-solution/maximal super-solution. We then leverage a geometric characterization of the problem to suggest a set of pruning strategies and interesting properties that can be leveraged to make the search of basic and super-solutions more efficient. We propose correct sequential algorithms to find solutions and super-solutions. In addition, we develop parallel algorithms to find basic and super-solutions. | 1.028903 | 0.028592 | 0.028592 | 0.028592 | 0.028592 | 0.01438 | 0.00783 | 0.00321 | 0.00039 | 0.000019 | 0.000001 | 0 | 0 | 0 |
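The row above contains several abstracts about energy-based models and restricted Boltzmann machines trained with contrastive divergence (Hinton, 2002) or its persistent variant. As rough orientation only, the sketch below implements a single CD-1 update for a binary RBM; the persistent variant described in the abstract differs in that the negative Gibbs chain is kept across updates rather than restarted at the data, and the layer sizes, learning rate, and batch size here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One CD-1 update for a binary RBM with visible bias b and hidden bias c.
    v0 has shape (batch, n_visible); W has shape (n_visible, n_hidden).
    """
    ph0 = sigmoid(v0 @ W + c)                    # positive-phase hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                  # one Gibbs step back to the visibles
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)                    # negative-phase hidden probabilities
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n      # approximate likelihood gradient
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage on random binary data (6 visible units, 4 hidden units).
v = (rng.random((32, 6)) < 0.5).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
b, c = np.zeros(6), np.zeros(4)
W, b, c = cd1_step(v, W, b, c)
print(W.shape, b.shape, c.shape)
```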
An infinitary encoding of temporal equilibrium logic This paper studies the relation between two recent extensions of propositional Equilibrium Logic, a well-known logical characterisation of Answer Set Programming. In particular, we show how Temporal Equilibrium Logic, which introduces modal operators as those typically handled in Linear-Time Temporal Logic (LTL), can be encoded into Infinitary Equilibrium Logic, a recent formalisation that allows the use of infinite conjunctions and disjunctions. We prove the correctness of this encoding and, as an application, we further use it to show that the semantics of the temporal logic programming formalism called TEMPLOG is subsumed by Temporal Equilibrium Logic. | Ramification and causality in a modal action logic The paper presents a logic for action theory based on a modal language, where modalities represent actions. The frame problem is tackled by using a nonmonotonic formalism which maximizes persistency assumptions. The problem of ramification is tackled by introducing a modal causality operator which is used to represent causal rules. Assumptions on the value of fluents in the initial state allow rea... | Formalizing Action and Change in Modal Logic I: the frame problem We present the basic framework of a logic of actions and plans defined in terms of modal logic combined with a notion of dependence. The latter is used as a weak causal connection between actions and literals. In this paper we focus on the frame problem and demonstrate how it can be solved in our framework in a simple and monotonic way. We give the semantics, and associate an axiomatics and a deci... | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Logic programs with classical negation | Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days. | Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations.
Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented. | Indexing By Latent Semantic Analysis | Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, check-sums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism. | Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[\log^k n]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all k. | A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems.
We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system. | iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings. | When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably well results, the anomaly detection essence underling most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to testify our approach and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper. © 2012 by the AIS/ICIS Administrative Office All rights reserved. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.2 | 0.022222 | 0.014286 | 0.001274 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
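The iSAM abstract in the row above describes maintaining a QR factorization of the sparse smoothing information matrix instead of an EKF covariance. The toy sketch below illustrates only the underlying linear-algebra step on a small synthetic least-squares problem: factor the Jacobian-like matrix A = QR and solve against the upper-triangular factor rather than forming the normal equations. It is not a SLAM implementation, and the problem sizes and noise level are assumptions for the example.

```python
import numpy as np

def solve_via_qr(A, b):
    """Solve the linear least-squares problem min ||A x - b||^2 by
    factoring A = Q R (reduced QR) and back-substituting in the
    upper-triangular factor R, instead of forming A^T A explicitly.
    """
    Q, R = np.linalg.qr(A)              # R plays the role of the square-root information matrix
    return np.linalg.solve(R, Q.T @ b)

# Tiny synthetic "smoothing" problem: 3 unknowns, 5 noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(5)
print(solve_via_qr(A, b))               # close to x_true
```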
Adding knowledge to the action description language A We introduce Ak, an extension of the action description language A (Gelfond & Lifschitz 1993) to handle actions which affect knowledge. We use sensing actions to increase an agent's knowledge of the world and non-deterministic actions to remove knowledge. We include complex plans involving conditionals and loops in our query language for hypothetical reasoning. Finally, we present a translation of descriptions in Ak to epistemic logic programs. | A Realistic Experiment in Knowledge Representation in Open Event Calculus: Protocol Specification This paper presents one of the first realistic experiments in the use of Event Calculus in Open Logic Programming: the specification of a process protocol. The specification task involves most of the common complications of temporal reasoning: the representation of context-dependent actions, of preconditions and ramifications of actions, the modelling of system faults, and most of all, the representation of uncertainty of actions. As the underlying language, the Open Logic Programming... | E-RES: A System for Reasoning about Actions, Events and Observations E-RES is a system that implements the Language E, a logic for reasoning about narratives of action occurrences and observations. E's semantics is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. The system derives sceptical non-monotonic consequences of a given reformulated theory which exactly correspond to consequences entailed by E's model-theory. The computation relies on a complementary ability of the system to derive credulous non-monotonic consequences together with a set of supporting assumptions which is sufficient for the (credulous) conclusion to hold. E-RES allows theories to contain general action laws, statements about action occurrences, observations and statements of ramifications (or universal laws). It is able to derive consequences both forward and backward in time. This paper gives a short overview of the theoretical basis of E-RES and illustrates its use on a variety of examples. Currently, E-RES is being extended so that the system can be used for planning. | Minimal Knowledge Approach to Reasoning about Actions and Sensing We present an autoepistemic approach for reasoning about actions in the presence of incomplete information and sensing. Specifically, we introduce a logical formalism that combines a very expressive logic of programs, the modal mu-calculus, with a minimal knowledge modality. | Probabilistic Situation Calculus In this article we propose a Probabilistic Situation Calculus logical language to represent and reason with knowledge about dynamic worlds in which actions have uncertain effects. Uncertain effects are modeled by dividing an action into two subparts: a deterministic (agent produced) input and a probabilistic reaction (produced by nature). We assume that the probabilities of the reactions have known distributions. Our logical language is an extension to Situation Calculae in the style proposed by Raymond Reiter. There are three aspects to this work. First, we extend the language in order to accommodate the necessary distinctions (e.g., the separation of actions into inputs and reactions). Second, we develop the notion of Randomly Reactive Automata in order to specify the semantics of our Probabilistic Situation Calculus.
Finally, we develop a reasoning system in MATHEMATICA capable of performing temporal projection in the Probabilistic Situation Calculus. | Logic, Knowledge Representation, and Bayesian Decision Theory In this paper I give a brief overview of recent work on uncertainty in AI, and relate it to logical representations. Bayesian decision theory and logic are both normative frameworks for reasoning that emphasize different aspects of intelligent reasoning. Belief networks (Bayesian networks) are representations of independence that form the basis for understanding much of the recent work on reasoning under uncertainty, evidential and causal reasoning, decision analysis, dynamical systems, optimal control, reinforcement learning and Bayesian learning. The independent choice logic provides a bridge between logical representations and belief networks that lets us understand these other representations and their relationship to logic and shows how they can be extended to first-order rule-based representations. This paper discusses what the representations of uncertainty can bring to the computational logic community and what the computational logic community can bring to those studying reasoning under uncertainty. | Extending Graphplan to handle uncertainty and sensing actions If an agent does not have complete information about the world-state, it must reason about alternative possible states of the world and consider whether any of its actions can reduce the uncertainty. Agents controlled by a contingent planner seek to generate a robust plan, that accounts for and handles all eventualities, in advance of execution. Thus a contingent plan may include sensing actions which gather information that is later used to select between different plan branches. Unfortunately, previous contingent planners suffered defects such as confused semantics, incompleteness, and inefficiency. In this paper we describe SGP, a descendant of Graphplan that solves contingent planning problems. SGP distinguishes between actions that sense the value of an unknown proposition from those that change its value. SGP does not suffer from the forms of incompleteness displayed by CNLP and Cassandra. Furthermore, SGP is relatively fast. | Formalizing Action and Change in Modal Logic I: the frame problem We present the basic framework of a logic of actions and plans defined in terms of modal logic combined with a notion of dependence. The latter is used as a weak causal connection between actions and literals. In this paper we focus on the frame problem and demonstrate how it can be solved in our framework in a simple and monotonic way. We give the semantics, and associate an axiomatics and a deci... | Possibilistic Planning: Representation and Complexity A possibilistic approach of planning under uncertainty has been developed recently. It applies to problems in which the initial state is partially known and the actions have graded nondeterministic effects, some being more possible (normal) than the others. The uncertainty on states and effects of actions is represented by possibility distributions. The paper first recalls the essence of possibilistic planning concerning the representational aspects and the plan generation algorithms used to... | A goal-oriented approach to computing the well-founded semantics Global SLS resolution is an ideal procedural semantics for the well-founded semantics.
We present a more effective variant of global SLS resolution, called XOLDTNF resolution, which incorporates simple mechanisms for loop detection and handling. Termination is guaranteed for all programs with the bounded-term-size property. We establish the soundness and (search space) completeness of XOLDTNF resolution. An implementation of XOLDTNF resolution in Prolog is available via FTP. | Differentiable Sparse Coding Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance. | Worlds to die for We last had an "open problems" column eighteen months ago [Hem94]. It contained seven problems. Of the seven, one has since been resolved (at least insofar as one can resolve the problem without outright collapsing complexity classes) in an exciting FOCS paper by Cai and Sivakumar ([CS95], see also [Ogi95b,CNS95]), and for another I received a proof via email unfortunately followed quickly by another email retracting the proof. Overall score: Mysteries of Complexity Theory: 6. Theoretical Computer Scientists: 1. If you go to Atlantic City, you know which side to bet on! But be of good cheer. This issue's column contains a new list of open problems (though some favorites from the old list have stowed away here too). And to stack the deck in favor of theoretical computer scientists, the problems are posed quite obliquely. Rather than asking you to prove "X," many of the problems (e.g., Problems 2, 4, 5, 6, and 7) just ask you to show that "In some oracle world, X." Sound easy? Dig in! And if your attempt to find a world where X holds becomes too frustrating, don't hesitate to go for the real glory --- by proving that X fails in the real world (and every relativized world)! | Reordering Query Execution in Tertiary Memory Databases In the relational model the order of fetching data does not affect query correctness. This flexibility is exploited in query optimization by statically reordering data accesses. However, once a query is optimized, it is executed in a fixed order in most systems, with the result that data requests are made in a fixed order. Only limited forms of runtime reordering can be provided by low-level device managers. More aggressive reordering strategies are essential in scenarios where the latency of access to data objects varies widely and dynamically, as in tertiary devices. This paper presents such a strategy. Our key innovation is to exploit dynamic reordering to match execution order to the optimal data fetch order, in all parts of the plan-tree. To demonstrate the practicality of our approach and the impact of our optimizations, we report on a prototype implementation based on Postgres. Using our system, typical I/O cost for queries on tertiary memory databases is as much as an order of magnitude smaller than with conventional query processing techniques.
| Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification. | 1.011876 | 0.010295 | 0.010295 | 0.008703 | 0.005274 | 0.003705 | 0.001859 | 0.000788 | 0.000159 | 0.000024 | 0 | 0 | 0 | 0 |
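The row above (the query on the action description language Ak and several related abstracts) concerns action languages in the style of A, whose semantics is a transition system generated by effect propositions of the form "a causes l if p1, ..., pn". The sketch below makes that reading concrete for deterministic effect propositions only, with inertia handled by copying unaffected fluents; the toggle example, the tuple encoding of laws, and the restriction to complete states are assumptions for illustration, and the sensing and knowledge-producing actions of Ak are not modeled.

```python
# Each effect proposition "action causes (fluent, value) if preconditions" is a triple.
LAWS = [
    ("toggle", ("on", True),  [("on", False)]),
    ("toggle", ("on", False), [("on", True)]),
]

def result(state, action, laws):
    """Successor state under A-style effect propositions.

    state is a complete assignment of fluents to truth values; fluents
    not affected by any applicable law keep their value (inertia).
    """
    new_state = dict(state)
    for a, (fluent, value), preconds in laws:
        if a == action and all(state[g] == w for g, w in preconds):
            new_state[fluent] = value
    return new_state

s0 = {"on": False}
s1 = result(s0, "toggle", LAWS)
print(s1)                            # {'on': True}
print(result(s1, "toggle", LAWS))    # back to {'on': False}
```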
Exploring Editorial Content Optimization for Websites through a Statistical Ranking of Articles. This study describes an online content optimization ranking system for editorial teams. Research on online content optimization has either focused on developing serving schemes for large online news and aggregation websites or complex algorithms for user generated content-based websites. An unexplored area in this domain was the development of a content optimization technique for smaller, editorially-focused sites that creates a long-term brand value that inspires visitors to engage with websites. The results of a study on 276 online articles and associated web metrics show that images within an article, the number of times visitors viewed an article and if they reached the article through a search engine were significant positive predictors of the time they spent with articles. However, the percentage of single-page visits to an article and the number of times visitors clicked a link outside of an article were significant negative predictors for the time they spent with articles. These factors were utilized to develop a statistical rank for content optimization, which shows some initial promising results. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning.
Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. 
The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. 
Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Parallel software development using an object-oriented modelling technique A significant amount of interest is currently being shown in the relationship between the paradigms of object-orientation and concurrency. This stems from the observation that objects display a great deal of concurrent behaviour in the way they can co-exist with one another. As a result, much research effort has gone into exploiting this relationship, primarily in the development of programming languages specifically aimed at producing parallel software. However, the exploitation of the object-oriented paradigm in the analysis and design of parallel software has not seen the same level of interest. This work presents an investigation into adopting object-oriented approaches during the analysis and design of parallel software by taking a well established object modelling method (OMT) and extending it using the PARSE process graph notation to account for the added dimensions of concurrency. This hybrid method is analysed and discussed by way of the development and implementation of a common parallel software scenario. The results of this exercise show that adopting an object-oriented view at the analysis and design stage of development can benefit the production of such a parallel software solution. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. 
| Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Proceedings of the 24th International Conference on Supercomputing, 2010, Tsukuba, Ibaraki, Japan, June 2-4, 2010 | CAAD BLASTn: Accelerated NCBI BLASTn with FPGA prefiltering The canonical bioinformatics application is determining the biological similarity of a new sequence (protein or DNA) with respect to databases of known sequences. The BLAST algorithm is used for the vast majority of these searches. Of the various BLAST implementations, the one published by NCBI is a recognized standard. In previous work we described FPGA acceleration of the protein version of NCBI BLAST (BLASTp) using our TreeBLAST-based filter. Here we apply this filter to NCBI BLASTn, the DNA version. We show the modifications to the structures of the filtering components needed to handle DNA, as opposed to protein, sequences. The design has been implemented on an Altera Stratix III family chip. Our experimental results show that the speedup is greater than 12x and the accuracy is 100%. | Using video-oriented instructions to speed up sequence comparison. Motivation: This document presents an implementation of the well-known Smith-Waterman algorithm for comparison of proteic and nucleic sequences, using specialized video instructions. These instructions, SIMD-like in their design, make possible parallelization of the algorithm at the instruction level. Results: Benchmarks on an ULTRA SPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving to our knowledge the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP-a LArge Scale Sequence compArison Package software developed at INRIA-which handles parallelism at higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (18 531 385 residues) in 29 s. This procedure is not restricted to databank scanning. It applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.). | Compressed indexing and local alignment of DNA Motivation: Recent experimental studies on compressed indexes (BWT, CSA, FM-index) have confirmed their practicality for indexing very long strings such as the human genome in the main memory. For example, a BWT index for the human genome (with about 3 billion characters) occupies just around 1 G bytes. However, these indexes are designed for exact pattern matching, which is too stringent for biological applications. The demand is often on finding local alignments (pairs of similar substrings with gaps allowed). Without indexing, one can use dynamic programming to find all the local alignments between a text T and a pattern P in O(|T||P|) time, but this would be too slow when the text is of genome scale (e.g. aligning a gene with the human genome would take tens to hundreds of hours). In practice, biologists use heuristic-based software such as BLAST, which is very efficient but does not guarantee to find all local alignments. Results: In this article, we show how to build a software called BWT-SW that exploits a BWT index of a text T to speed up the dynamic programming for finding all local alignments. Experiments reveal that BWT-SW is very efficient (e.g.
aligning a pattern of length 3 000 with the human genome takes less than a minute). We have also analyzed BWT-SW mathematically for a simpler similarity model (with gaps disallowed), and we show that the expected running time is O(|T|^0.628|P|) for random strings. As far as we know, BWT-SW is the first practical tool that can find all local alignments. Yet BWT-SW is not meant to be a replacement of BLAST, as BLAST is still several times faster than BWT-SW for long patterns and BLAST is indeed accurate enough in most cases (we have used BWT-SW to check against the accuracy of BLAST and found that only rarely BLAST would miss some significant alignments). Availability: www.cs.hku.hk/~ckwong3/bwtsw Contact: [email protected] | Biological information signal processor The computation requirements for mapping and sequencing the human genome might soon exceed the capability of any existing supercomputer. The systolic array processor presented in this paper, called biological information signal processor (BISP), has the capability to satisfy the current and anticipated future computational requirements for performing sequence comparisons based on the T.F. Smith and M.S. Waterman algorithm (1981) as extended by M.S. Waterman and M. Eggert (1987). The BISP can conduct the most time consuming sequence comparison functions, establishing both global and local relationships between two sequences. A modified Smith and Waterman algorithm is presented in this paper for efficient VLSI implementation. Methods are developed to reduce the BISP systolic array I/O bandwidth problem by reporting only the statistically significant results. Estimated performance of the BISP is compared with several different computer architectures | A high performance fpga-based implementation of position specific iterated blast We present in this paper the first reported FPGA implementation of the Position Specific Iterated BLAST (PSI-BLAST) algorithm. The latter is a heuristic biological sequence alignment algorithm that is widely used in the bioinformatics and computational biology world in order to detect weak homologs. The architecture of our FPGA implementation is parameterized in terms of sequence lengths, scoring matrix, gap penalties and cut-off and threshold values. It is composed of various blocks each of which performs one step of the algorithm in parallel. This results in high performance implementations, which easily outperform equivalent software implementations by one order of magnitude or more. Furthermore, the core was captured in an FPGA-platform-independent language, namely the Handel-C language, to which no specific resource inference or placement constraints were applied. This makes our core portable across different FPGA families and architectures. | Finding motifs using random projections. The DNA motif discovery problem abstracts the task of discovering short, conserved sites in genomic DNA. Pevzner and Sze recently described a precise combinatorial formulation of motif discovery that motivates the following algorithmic challenge: find twenty planted occurrences of a motif of length fifteen in roughly twelve kilobases of genomic sequence, where each occurrence of the motif differs from its consensus in four randomly chosen positions. Such "subtle" motifs, though statistically highly significant, expose a weakness in existing motif-finding algorithms, which typically fail to discover them.
Pevzner and Sze introduced new algorithms to solve their (15,4)-motif challenge, but these methods do not scale efficiently to more difficult problems in the same family, such as the (14,4)-, (16,5)-, and (18,6)-motif problems. We introduce a novel motif-discovery algorithm, PROJECTION, designed to enhance the performance of existing motif finders using random projections of the input's substrings. Experiments on synthetic data demonstrate that PROJECTION remedies the weakness observed in existing algorithms, typically solving the difficult (14,4)-, (16,5)-, and (18,6)-motif problems. Our algorithm is robust to nonuniform background sequence distributions and scales to larger amounts of sequence than that specified in the original challenge. A probabilistic estimate suggests that related motif-finding problems that PROJECTION fails to solve are in all likelihood inherently intractable. We also test the performance of our algorithm on realistic biological examples, including transcription factor binding sites in eukaryotes and ribosome binding sites in prokaryotes. | A Run-Time Reconfigurable System for Gene-Sequence Searching Advances in the field of bio-technology has led to anever increasing demand for computational resourcesto rapidly search large databases of genetic information.Databases with billions of data elements are routinelycompared and searched for matching and near-matchingpatterns. In this paper we present a systemdeveloped to search DNA sequence data using runtimereconfiguration of Field Programmable Gate Arrays(FPGAs). The system provides an order of magnitudeincrease in performance while reducing hardwarecomplexity when compared to existing commercial systems. | Parallel database systems: the future of high performance database systems | Closure properties of constraints Many combinatorial search problems can be expressed as “constraint satisfaction problems” and this class of problems is known to be NP-complete in general. In this paper, we investigate the subclasses that arise from restricting the possible constraint types. We first show that any set of constraints that does not give rise to an NP-complete class of problems must satisfy a certain type of algebraic closure condition. We then investigate all the different possible forms of this algebraic closure property, and establish which of these are sufficient to ensure tractability. As examples, we show that all known classes of tractable constraints over finite domains can be characterized by such an algebraic closure property. Finally, we describe a simple computational procedure that can be used to determine the closure properties of a given set of constraints. This procedure involves solving a particular constraint satisfaction problem, which we call an “indicator problem.” | An overview of MetaMap: historical perspective and recent advances. MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. 
Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD. | Polynomial-time recognition of minimal unsatisfiable formulas with fixed clause-variable difference A formula (in conjunctive normal form) is said to be minimal unsatisfiable if it is unsatisfiable and deleting any clause makes it satisfiable. The deficiency of a formula is the difference of the number of clauses and the number of variables. It is known that every minimal unsatisfiable formula has positive deficiency. Until recently, polynomial-time algorithms were known to recognize minimal unsatisfiable formulas with deficiency 1 and 2. We state an algorithm which recognizes minimal unsatisfiable formulas with any fixed deficiency in polynomial time. | Introduction: progress in formal commonsense reasoning This special issue consists largely of expanded and revised versions of selected papers of the Fifth International Symposium on Logical Formalizations of Commonsense Reasoning (Common Sense 2001), held at New York University in May 2001., The Common Sense Symposia, first organized in 1991 by John McCarthy and held roughly biannually since, are dedicated to exploring the development of formal commonsense theories using mathematical logic. Commonsense reasoning is a central part of human behavior; no real intelligence is possible without it. Thus, the development of systems that exhibit commonsense behavior is a central goal of Artificial Intelligence. It has proven to be more difficult to create systems that are capable of commonsense reasoning than systems that can solve "hard" reasoning problems. There are chess-playing programs that beat champions [5] and expert systems that assist in clinical diagnosis [32], but no programs that reason about how far one must bend over to put on one's socks. Part of the difficulty is the all-encompassing aspect of commonsense reasoning: any problem one looks at touches on many different types of knowledge. Moreover, in contrast to expert knowledge which is usually explicit, most commonsense knowledge is implicit. One of the prerequisites to developing commonsense reasoning systems is making this knowledge explicit. John McCarthy [25] first noted this need and suggested using formal logic to encode commonsense knowledge and reasoning. In the ensuing decades, there has been much research on the representation of knowledge in formal logic and on inference algorithms to | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.203802 | 0.203802 | 0.067953 | 0.051784 | 0.018019 | 0.001246 | 0.000879 | 0.000188 | 0 | 0 | 0 | 0 | 0 | 0 |
Infinite Ensemble for Image Clustering Image clustering has been a critical preprocessing step for vision tasks, e.g., visual concept discovery, content-based image retrieval. Conventional image clustering methods use handcraft visual descriptors as basic features via K-means, or build the graph within spectral clustering. Recently, representation learning with deep structure shows appealing performance in unsupervised feature pre-treatment. However, few studies have discussed how to deploy deep representation learning to image clustering problems, especially the unified framework which integrates both representation learning and ensemble clustering for efficient image clustering still remains void. In addition, even though it is widely recognized that with the increasing number of basic partitions, ensemble clustering gets better performance and lower variances, the best number of basic partitions for a given data set is a pending problem. In light of this, we propose the Infinite Ensemble Clustering (IEC), which incorporates the power of deep representation and ensemble clustering in a one-step framework to fuse infinite basic partitions. Generally speaking, a set of basic partitions is firstly generated from the image data, then by converting the basic partitions to the 1-of-K codings, we link the marginalized auto-encoder to the infinite ensemble clustering with i.i.d. basic partitions, which can be approached by the closed-form solutions, finally we follow the layer-wise training procedure and feed the concatenated deep features to K-means for final clustering. Extensive experiments on diverse vision data sets with different levels of visual descriptors demonstrate both the time efficiency and superior performance of IEC compared to the state-of-the-art ensemble clustering and deep clustering methods. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. 
We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. 
In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). 
Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Research On Scheduling Scheme For Hadoop Clusters In this paper, we import a prefetching mechanism into MapReduce model while retaining compatibility with the native Hadoop. Given a data-intensive application running on a Hadoop MapReduce cluster, our approach estimates the execution time of each task and adaptively preloads an amount of data to the memory before the new task is assigned to the computing node. We implement a predictive schedule and prefetching (PSP) mechanism, which is integrated into the native MapReduce runtime system. We also evaluate performance on a 10-node cluster using two popular benchmarks-grep and wordcount. The PSP mechanism reduces the execution time of grep and wordcount up to 28 % with an average of 19%. Moreover, the PSP model increases the overall throughput and improves the I/O utilization. Because of the limitation of length, we did not present the experiment result detail in this paper. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. 
Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. 
| Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. 
We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
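The "Nonlinear component analysis as a kernel eigenvalue problem" entry in the row above reduces nonlinear PCA to an eigenvalue problem on a kernel matrix. A minimal NumPy sketch of that idea, assuming an RBF kernel and the standard double-centering step (both choices are illustrative and not taken from the entry):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Coordinates of each sample on the leading components in RBF feature space."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n   # center the kernel matrix
    eigvals, eigvecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]    # reorder to descending
    # Normalize eigenvectors so that the corresponding feature-space directions have unit norm.
    alphas = eigvecs[:, :n_components] / np.sqrt(np.maximum(eigvals[:n_components], 1e-12))
    return Kc @ alphas

X = np.random.RandomState(0).randn(100, 5)
print(kernel_pca(X, n_components=2).shape)   # (100, 2)
```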
Multiview Physician-Specific Attributes Fusion for Health Seeking. Community-based health services have risen as important online resources for resolving users health concerns. Despite the value, the gap between what health seekers with specific health needs and what busy physicians with specific attitudes and expertise can offer is being widened. To bridge this gap, we present a question routing scheme that is able to connect health seekers to the right physicia... | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. 
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Probabilistic polynomial time is closed under parity reductions We show that probabilistic polynomial time (PP) is closed under polynomial-time parity reductions. As corollaries, we show that several complexity classes are contained in PP. Results: Comparing the power of various computational paradigms is a central concern of computational complexity theory. In this paper, we study the power of PP by asking whether it is closed downwards under polynomial-time parity reductions. Gill [5] showed that NP is contained in PP. Russo [15] showed that the class... | Structure of Complexity Classes: Separations, Collapses, and Completeness During the last few years, unprecedented progress has been made in structural complexity theory; class inclusions and relativized separations were discovered, and hierarchies collapsed. We survey this progress, highlighting the central role of counting techniques. We also present a new result whose proof demonstrates the power of combinatorial arguments: there is a relativized world in which UP has no Turing complete sets. | The complexity of promise problems with applications to public-key cryptography | Query Order We study the effect of query order on computational power and show that ${\rm P}^{{\rm BH}_j[1]:{\rm BH}_k[1]}$---the languages computable via a polynomial-time machine given one query to the $j$th level of the boolean hierarchy followed by one query to the $k$th level of the boolean hierarchy---equals ${\rm R}_{{j+2k-1}{\scriptsize\mbox{-tt}}}^{p}({\rm NP})$ if $j$ is even and $k$ is odd and equals ${\rm R}_{{j+2k}{\scriptsize\mbox{-tt}}}^{p}({\rm NP})$ otherwise. Thus unless the polynomial hierarchy collapses it holds that, for each $1\leq j \leq k$, ${\rm P}^{{\rm BH}_j[1]:{\rm BH}_k[1]} = {\rm P}^{{\rm BH}_k[1]:{\rm BH}_j [1]} \iff (j=k) \lor (j\mbox{ is even}\, \land k=j+1)$. We extend our analysis to apply to more general query classes. | On truth-table reducibility to SAT We show that polynomial time truth-table reducibility via Boolean circuits to SAT is the same as logspace truth-table reducibility via Boolean formulas to SAT and the same as logspace Turing reducibility to SAT. In addition, we prove that a constant number of rounds of parallel queries to SAT is equivalent to one round of parallel queries. We give an oracle relative to which Δ^p_2 is not equal to the class of predicates polynomial time truth-table reducible to SAT. | The strong exponential hierarchy collapses The polynomial hierarchy, composed of the levels P, NP, P^NP, NP^NP, etc., plays a central role in classifying the complexity of feasible computations. It is not known whether the polynomial hierarchy collapses. We resolve the question of collapse for an exponential-time analogue of the polynomial-time hierarchy. Composed of the levels E (i.e., ∪_c DTIME[2^{cn}]), NE, P^NE, NP^NE, etc., the strong exponential hierarchy collapses to its Δ_2 level. E ≠ P^NE = NP^NE ∪ NP^{NP^NE} ∪ ··· Our proof stresses the use of partial census information and the exploitation of nondeterminism. Extending our techniques, we also derive new quantitative relativization results. We show that if the (weak) exponential hierarchy's Δ_{j+1} and Σ_{j+1} levels, respectively E^{Σ^p_j} and NE^{Σ^p_j}, do separate, this is due to the large number of queries NE makes to its Σ^p_j database. Our techniques provide a successful method of proving the collapse of certain complexity classes.
| On the Boolean closure of NP By endowing usual nondeterministic Turing machines with new modes of acceptance we introduce new machines whose computational power is bounded by that of alternating Turing machines making only one alternation. The polynomial time classes of these machines are exactly the levels of the Boolean closure of NP which can be defined in a natural way. For all these classes natural problems can be found which are proved to be
≤^p_m
-complete in these classes. | Facets of the knapsack polytope Abstract A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality. A sufficient condition is also established for a larger class of inequalities (with coefficients not restricted to 0 and 1) to define a facet for the same polytope, and a procedure is given for generating all facets in the above two classes. The procedure can be viewed as a way of generating cutting planes for 0–1 programs. | The Minimization Problem for Boolean Formulas More than a quarter of a century ago, the question of the complexity of determining whether a given Boolean formula is minimal motivated Meyer and Stockmeyer to define the polynomial hierarchy. This problem (in the standard formalized version---that of Garey and Johnson) has been known for decades to be coNP-hard and in NPNP, and yet no one had even been able to establish (many-one) NP-hardness. In this paper, we show that and more: The problem in fact is (many-one) hard for parallel access to NP. | Engineering benchmarks for planning: the domains used in the deterministic part of IPC-4 In a field of research about general reasoning mechanisms, it is essential to have appropriate benchmarks. Ideally, the benchmarks should reflect possible applications of the developed technology. In AI Planning, researchers more and more tend to draw their testing examples from the benchmark collections used in the International Planning Competition (IPC). In the organization of (the deterministic part of) the fourth IPC, IPC-4, the authors therefore invested significant effort to create a useful set of benchmarks. They come from five different (potential) real-world applications of planning: airport ground traffic control, oil derivative transportation in pipeline networks, model-checking safety properties, power supply restoration, and UMTS call setup. Adapting and preparing such an application for use as a benchmark in the IPC involves, at the time, inevitable (often drastic) simplifications, as well as careful choice between, and engineering of, domain encodings. For the first time in the IPC, we used compilations to formulate complex domain features in simple languages such as STRIPS, rather than just dropping the more interesting problem constraints in the simpler language subsets. The article explains and discusses the five application domains and their adaptation to form the PDDL test suites used in IPC-4. We summarize known theoretical results on structural properties of the domains, regarding their computational complexity and provable properties of their topology under the h+ function (an idealized version of the relaxed plan heuristic). We present new (empirical) results illuminating properties such as the quality of the most wide-spread heuristic functions (planning graph, serial planning graph, and relaxed plan), the growth of propositional representations over instance size, and the number of actions available to achieve each fact; we discuss these data in conjunction with the best results achieved by the different kinds of planners participating in IPC-4. | Formal methods for the validation of automotive product configuration data Constraint-based reasoning is often used to represent and find solutions to configuration problems. In the field of constraint satisfaction, the major focus has been on finding solutions to difficult problems. 
However, many real-life configuration problems, ... | Arc Consistency Algorithms via Iterations of Subsumed Functions We provide here an extension of a general framework introduced in [Apt99b, Apt99c] that allows to explain several local consistency algorithms in a systematic way. In this framework we proceed in two steps. First, we introduce a generic iteration algorithm on partial orderings and prove its correctness. Then we instantiate this algorithm with specific partial orderings and functions to obtain specific local consistency algorithms. In particular, using the notion of subsumption, we show that the algorithms AC4, HAC-4, AC-5 and our extension HAC-5 of AC-5 are instances of a single generic algorithm. | BDD-based decision procedures for the modal logic K We describe BDD-based decision procedures for the modal log ic K. Our approach is inspired by the automata-theoretic approach, but we avoi d explicit automata construction. Instead, we compute certain fixpoints of a set of types—whichcan be viewed as an on-the-fly emptiness of the automaton. We use BDDs to represent and mani pulate such type sets, and investigate different kinds of representations as well as a"level-based" representation scheme. The latter turns out to speed up construction and reduce memo ry consumption considerably. We also study the effect of formula simplification on our deci sion procedures. To proof the viability of our approach, we compare our approach with a rep resentative selection of other approaches, including a translation ofK to QBF. Our results indicate that the BDD-based approach dominates for modally heavy formulae, while searc h-based approaches dominate for propositionally heavy formulae. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.036415 | 0.03743 | 0.012675 | 0.008416 | 0.005659 | 0.003311 | 0.001369 | 0.000076 | 0.000009 | 0 | 0 | 0 | 0 | 0 |
Assumption-Based Planning: Generating Plans and Explanations under Incomplete Knowledge. | Logic programs with classical negation | The computational complexity of propositional STRIPS planning I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For these types of restrictions, I show when planning is tractable (polynomial) and intractable (NP-hard). In general, it is... | Extended stable semantics for normal and disjunctive programs | The well-founded semantics for general logic programs A general logic program (abbreviated to “program” hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the “meaning of the program,” or its “declarative semantics.” Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a “satisfactory” total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of “stratified” and “locally stratified” programs. The method in this paper is also compared with other proposals in the literature, including Clark’s “program completion,” Fitting’s and Kunen’s 3-valued interpretations of it, and the “stable models” of Gelfond and Lifschitz. | Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems. | Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space.
Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales.The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds. | Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. | Improving the I/O Performance of Real-Time Database Systems with Multiple-Disk Storage Structures | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. 
| Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.005 | 0.000459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
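The "well-founded semantics" entry in the row above contrasts well-founded partial models with the stable models of Gelfond and Lifschitz. A minimal brute-force sketch of the stable-model condition for a tiny ground normal program, assuming a (head, positive body, negative body) tuple encoding invented here for illustration:

```python
from itertools import chain, combinations

def least_model(definite_rules):
    """Least Herbrand model of a negation-free ground program (iterate until fixpoint)."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """Gelfond-Lifschitz condition: candidate equals the least model of its reduct."""
    reduct = [(h, pos) for (h, pos, neg) in program if not (neg & candidate)]
    return least_model(reduct) == candidate

def stable_models(program):
    """Enumerate all stable models by checking every subset of the atoms (exponential, demo only)."""
    atoms = set()
    for h, pos, neg in program:
        atoms |= {h} | pos | neg
    subsets = chain.from_iterable(combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(program, set(s))]

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
demo = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"}))]
print(stable_models(demo))
```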
Complexity of Concurrent Temporal Planning We consider the problem of temporal planning in which a given goal is reached by taking a number of actions which may temporally overlap and interfere, and the interference may be essential for reaching the goals. We formalize a general temporal planning problem, show that its plan existence problem is EXPSPACE-complete, and give conditions under which it is reducible to classical planning and is therefore only PSPACE-complete. Our results are the first to show that temporal planning can be computationally more complex than classical planning. They also show how and why a very large and important fragment of temporal PDDL is reducible to classical planning. | Compilation of a High-level Temporal Planning Language into PDDL 2.1 An important aspect of any automatic planner is the language in which the user expresses problem instances. A rich language is an advantage for the user, whereas a simple language is an advantage for the programmer who must write a program to solve all planning problems expressible in the language. Considering the temporal planning language PDDL 2.1 as a low-level language, we show how to automatically compile a much richer language into PDDL 2.1. The worst-case complexity of this transformation is quadratic. Our high-level language allows the user to declare time-points and impose simple temporal constraints between them. Conditions and effects can be imposed at time-points, over intervals and over sliding intervals within fixed intervals. Non-instantaneous transitions can also be modelled. | Relaxation of Temporal Planning Problems Relaxation is ubiquitous in the practical resolution of combinatorial problems. If a valid relaxation of an instance has no solution then the original instance has no solution. A tractable relaxation can be built and solved in polynomial time. The most obvious application is the efficient detection of certain unsolvable instances. We review existing relaxation techniques in temporal planning and propose an alternative relaxation inspired by a tractable class of temporal planning problems. Our approach is orthogonal to relaxations based on the ignore-all-deletes approach used in non-temporal planning. We show that our relaxation can even be applied to non-temporal problems, and can also be used to extend a tractable class of temporal planning problems. | Fair LTL synthesis for non-deterministic systems using strong cyclic planners We consider the problem of planning in environments where the state is fully observable, actions have non-deterministic effects, and plans must generate infinite state trajectories for achieving a large class of LTL goals. More formally, we focus on the control synthesis problem under the assumption that the LTL formula to be realized can be mapped into a deterministic Büchi automaton. We show that by assuming that action nondeterminism is fair, namely that infinite executions of a nondeterministic action in the same state yield each possible successor state an infinite number of times, the (fair) synthesis problem can be reduced to a standard strong cyclic planning task over reachability goals. Since strong cyclic planners are built on top of efficient classical planners, the transformation reduces the non-deterministic, fully observable, temporally extended planning task into the solution of classical planning problems. 
A number of experiments are reported showing the potential benefits of this approach to synthesis in comparison with state-of-the-art symbolic methods. | A weighted CSP approach to cost-optimal planning For planning to come of age, plans must be judged by a measure of quality, such as the total cost of actions. This paper describes an optimal-cost planner which guarantees global optimality whenever the planning problem has a solution. We code the extraction of an optimal plan, from a planning graph with a fixed number k of levels, as a weighted constraint satisfaction problem (WCSP). The specific structure of the resulting WCSP means that a state-of-the-art exhaustive solver was able to find an optimal plan in planning graphs containing several thousand nodes. Thorough experimental investigations demonstrated that using the planning graph in optimal planning is a practical possibility for problems of moderate size, although not competitive, in terms of computation time, with optimal state-space-search planners. Our general conclusion is, therefore, that planning-graph-based optimal planning is not the most efficient method for cost-optimal planning. Nonetheless, the notions of indispensable (sets of) actions and too-costly actions introduced in this paper have various potential applications in optimal planning. These actions can be detected very rapidly by analysis of the relaxed planning graph. | Monotone temporal planning: tractability, extensions and applications This paper describes a polynomially-solvable class of temporal planning problems. Polynomiality follows from two assumptions. Firstly, by supposing that each sub-goal fluent can be established by at most one action, we can quickly determine which actions are necessary in any plan. Secondly, the monotonicity of sub-goal fluents allows us to express planning as an instance of STP≠ (Simple Temporal Problem with difference constraints). This class includes temporally-expressive problems requiring the concurrent execution of actions, with potential applications in the chemical, pharmaceutical and construction industries. We also show that any (temporal) planning problem has a monotone relaxation which can lead to the polynomial-time detection of its unsolvability in certain cases. Indeed we show that our relaxation is orthogonal to relaxations based on the ignore-deletes approach used in classical planning since it preserves deletes and can also exploit temporal information. | Blocks World revisited Contemporary AI shows a healthy trend away from artificial problems towards real-world applications. Less healthy, however, is the fashionable disparagement of “toy” domains: when properly approached, these domains can at the very least support meaningful systematic experiments, and allow features relevant to many kinds of reasoning to be abstracted and studied. A major reason why they have fallen into disrepute is that superficial understanding of them has resulted in poor experimental methodology and consequent failure to extract useful information. This paper presents a sustained investigation of one such toy: the (in)famous Blocks World planning problem, and provides the level of understanding required for its effective use as a benchmark. Our results include methods for generating random problems for systematic experimentation, the best domain-specific planning algorithms against which AI planners can be compared, and observations establishing the average plan quality of near-optimal methods. 
We also study the distribution of hard/easy instances, and identify the structure that AI planners must be able to exploit in order to approach Blocks World successfully. | The computational complexity of propositional STRIPS planning I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For these types of restrictions, I show when planning is tractable (polynomial) and intractable (NP-hard). In general, it is... | Systemic Nonlinear Planning This paper presents a simple, sound, complete, and systematic algorithm for domain independent STRIPS planning. Simplicity is achieved by starting with a ground procedure and then applying a general, and independently verifiable, lifting transformation. Previous planners have been designed directly as lifted procedures. Our ground procedure is a ground version of Tate's NONLIN procedure. In Tate's procedure one is not required to determine whether a prerequisite of a step in an unfinished plan is guaranteed to hold in all linearizations. This allows Tate's procedure to avoid the use of Chapman's modal truth criterion. Systematicity is the property that the same plan, or partial plan, is never examined more than once. | Graph Minors. XX. Wagner's conjecture We prove Wagner's conjecture, that for every infinite set of finite graphs, one of its members is isomorphic to a minor of another. | The Google file system We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use. | The Camelot Project | Learning Structured Embeddings of Knowledge Bases. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected.
To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.022881 | 0.024699 | 0.023596 | 0.02 | 0.012929 | 0.007865 | 0.00249 | 0.000611 | 0.000097 | 0.000001 | 0 | 0 | 0 | 0 |
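The "Monotone temporal planning" entry in the row above expresses a tractable class of temporal planning as a Simple Temporal Problem with difference constraints. A minimal sketch of the standard STP consistency test by negative-cycle detection, assuming constraints are given as (i, j, c) triples meaning t_j - t_i <= c (an encoding chosen for this sketch):

```python
def stp_consistent(n_timepoints, constraints):
    """
    Consistency of a Simple Temporal Problem.
    constraints: list of (i, j, c) meaning t_j - t_i <= c.
    The system is consistent iff the distance graph has no negative cycle,
    checked here with Bellman-Ford relaxation from an implicit zero source.
    """
    dist = [0.0] * n_timepoints
    for _ in range(n_timepoints):
        updated = False
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                updated = True
        if not updated:
            return True          # distances converged: no negative cycle
    return False                 # still relaxing after n rounds: negative cycle

# t1 - t0 <= 5 and t1 - t0 >= 3: consistent
print(stp_consistent(2, [(0, 1, 5), (1, 0, -3)]))   # True
# t1 - t0 <= 2 and t1 - t0 >= 3: inconsistent
print(stp_consistent(2, [(0, 1, 2), (1, 0, -3)]))   # False
```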
Object Models for Distributed or Persistent Programming As use of object orientation for application development has increased, many researchers have investigated the design of object-based programming languages for the distributed and persistent programming. This paper concentrates on reviewing a number of object-oriented languages for distributed or persistent programming. In each case, the focus is on the object model supported and the mechanisms and policies employed in the implementation of distributed or persistent objects. In particular, each language reviewed has been chosen to illustrate a particular object model or implementation strategy. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. 
We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Automatic feature constructing from vibration signals for machining state monitoring Machining state monitoring is an important subject for intelligent manufacturing. Feature construction is accepted to be the most critical procedure for a signal-based monitoring system and has attracted a lot of research interest. The traditional manual constructing way is skill intensive and the performance cannot be guaranteed. This paper presented an automatic feature construction method which can reveal the inherent relationship between the input vibration signals and the output machining states, including idling moving, stable cutting and chatter, using a reasonable and mathematical way. Firstly a large signal set is carefully prepared by a series of machining experiments followed by some necessary preprocessing. And then, a deep belief network is trained on the signal set to automatically construct features using the two step training procedure, namely unsupervised greedily layer-wise pertaining and supervised fine-tuning. The automatically extracted features can exactly reveal the connection between the vibration signal and the machining states. Using the automatic extracted features, even a linear classifier can easily achieve nearly 100% modeling accuracy and wonderful generalization performance, besides good repeatability precision on a large well prepared signal set. For the actual online application, voting strategy is introduced to smooth the predicted states and make the final state identification to ensure the detection reliability by taking consideration of the machining history. Experiments proved the proposed method to be efficient in protecting the workpiece from serious chatter damage. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. 
We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. 
In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). 
Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
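Illustrative note on the row above: its query abstract on machining-state monitoring ends by smoothing per-frame state predictions with a voting strategy over recent history before reporting a final state. The sketch below covers only that last step (the window length and state labels are invented, and the deep-belief-network feature learner is not reproduced); it majority-votes over a sliding window of frame-level predictions so isolated misclassifications do not flip the reported state.

```python
from collections import Counter, deque

def smooth_states(frame_predictions, window=5):
    """Replace each frame-level prediction with the majority label over the
    last `window` predictions (a simple voting smoother)."""
    history = deque(maxlen=window)
    smoothed = []
    for label in frame_predictions:
        history.append(label)
        majority, _ = Counter(history).most_common(1)[0]
        smoothed.append(majority)
    return smoothed

# Invented example: a spurious "chatter" frame inside a stable-cutting run.
raw = ["idle", "stable", "stable", "chatter", "stable",
       "stable", "chatter", "chatter", "chatter"]
print(smooth_states(raw, window=3))
```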
Abstract branching for quantified formulas We introduce a novel search-based decision procedure for Quantified Boolean Formulas (QBFs), called Abstract Branching. As opposed to standard search-based procedures, it escapes the burdensome need for branching on both children of every universal node in the search tree. This is achieved by branching on existential variables only, while admissible universal assignments are inferred. Running examples and experimental results are reported. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. 
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
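Illustrative note on the row above: both its query (abstract branching for QBFs) and one of its ranked abstracts concern search-based evaluation of quantified Boolean formulas. The sketch below is a naive textbook-style recursive evaluator for closed prenex-CNF QBFs, not the abstract-branching procedure or any of the cited solvers; the encoding of the prefix and clauses is an assumption made for the example.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Naively evaluate a closed prenex-CNF QBF.

    prefix  : list of (quantifier, variable) pairs, e.g. [("A", 1), ("E", 2)]
    clauses : list of clauses, each a list of non-zero ints (negative = negated)
    """
    assignment = assignment or {}
    if not prefix:
        # All variables assigned: the matrix is true iff every clause
        # contains at least one satisfied literal.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
    (quant, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, var: val})
                for val in (False, True))
    return all(branches) if quant == "A" else any(branches)

# forall x exists y . (x or y) and (not x or not y)  -- true (choose y = not x)
prefix = [("A", 1), ("E", 2)]
clauses = [[1, 2], [-1, -2]]
print(eval_qbf(prefix, clauses))  # True
```

A universal node here always branches on both values, which is exactly the cost that the abstract-branching idea in the query tries to avoid.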
Classification of lung sounds using convolutional neural networks. In the field of medicine, with the introduction of computer systems that can collect and analyze massive amounts of data, many non-invasive diagnostic methods are being developed for a variety of conditions. In this study, our aim is to develop a non-invasive method of classifying respiratory sounds that are recorded by an electronic stethoscope and the audio recording software that uses various machine learning algorithms. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. 
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
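Illustrative note on the row above: its query abstract concerns classifying respiratory sounds recorded by an electronic stethoscope with machine-learning methods. The sketch below is a deliberately crude stand-in (the synthetic "recordings", band-energy features, and nearest-centroid classifier are all invented for illustration and are not the cited study's pipeline): it extracts log band energies from a signal and assigns the class of the nearest centroid.

```python
import numpy as np

def band_energy_features(signal, n_bands=8):
    """Crude spectral features: log energy in n_bands equal-width bands of the
    magnitude spectrum (a stand-in for richer audio features)."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([np.sum(b ** 2) for b in bands]))

rng = np.random.default_rng(0)
t = np.arange(4000) / 4000.0  # one second at a hypothetical 4 kHz sample rate

def fake_recording(kind):
    # Purely synthetic stand-ins for "normal" vs "wheeze-like" recordings.
    base = 0.1 * rng.standard_normal(t.size)
    return base + (np.sin(2 * np.pi * 400 * t) if kind == "wheeze" else 0.0)

train = [(fake_recording(k), k) for k in ("normal", "wheeze") for _ in range(10)]
centroids = {
    lab: np.mean([band_energy_features(x) for x, y in train if y == lab], axis=0)
    for lab in ("normal", "wheeze")
}

feats = band_energy_features(fake_recording("wheeze"))
pred = min(centroids, key=lambda lab: np.linalg.norm(feats - centroids[lab]))
print("predicted class:", pred)
```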
Randomized Denoising Autoencoders for Smaller and Efficient Imaging Based AD Clinical Trials. There is growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently several authors investigated how clinical trials for AD can be made more efficient (i.e., smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to more accurately correlate to stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime - the default situation in medical imaging. This result is of independent interest. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. 
Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. 
The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. 
Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
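Illustrative note on the row above: its query abstract builds on denoising autoencoders for imaging-based measures. The sketch below is a generic single-hidden-layer denoising autoencoder trained with plain SGD on toy data; the randomized-ensemble and variance-estimation aspects of the cited rDA model are not reproduced, and all sizes and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data standing in for feature vectors (n subjects, d features).
n, d, h = 200, 50, 16
X = rng.standard_normal((n, d))

W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, d)); b2 = np.zeros(d)
lr, noise_p = 0.05, 0.3

for epoch in range(200):
    # Denoising: corrupt inputs by randomly zeroing entries, then train the
    # network to reconstruct the clean inputs.
    mask = rng.random(X.shape) > noise_p
    X_noisy = X * mask

    H = sigmoid(X_noisy @ W1 + b1)   # encoder
    X_hat = H @ W2 + b2              # linear decoder
    err = X_hat - X                  # reconstruction error

    # Backpropagation of the mean squared reconstruction loss.
    gW2 = H.T @ err / n
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)
    gW1 = X_noisy.T @ dH / n
    gb1 = dH.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

recon = sigmoid(X @ W1 + b1) @ W2 + b2
print("final reconstruction MSE:", float(np.mean((recon - X) ** 2)))
```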
CAAD BLASTP 2.0: NCBI BLASTP accelerated with pipelined filters. | CAAD BLASTP: NCBI BLASTP Accelerated with FPGA-Based Accelerated Pre-Filtering NCBI BLAST has become the de facto standard in bioinformatic approximate string matching and so its acceleration is of fundamental importance. The problem is that it uses complex heuristics which make it difficult to simultaneously achieve both substantial speed-up and exact agreement with the original output. Our approach is to prefilter the database. To make this work we have developed a novel heuristic which we append to a previously described structure for ungapped alignment. This enables us to quickly reduce the database by factors of 300 and 1100, for the ungapped and gapped options, respectively, while rejecting no significant sequences. On current hardware we anticipate a speed-up of at least a factor of 10 for NCBI BLASTP, independent of sensitivity settings. This filter is portable to other BLAST codes, and other filters can be similarly integrated into NCBI BLAST. | A rate-based prefiltering approach to blast acceleration DNA sequence comparison and database search have evolved in the last years as a field of strong competition between several reconfigurable hardware computing groups. In this paper we present a BLAST preprocessor that efficiently marks the parts of the database that may produce matches. Our prefiltering approach offers significant reduction in the size of the database that needs to be fully processed by BLAST, with a corresponding reduction in the run-time of the algorithm. We have implemented our architecture, evaluated its effectiveness for a variety of databases and queries, and compared its accuracy against the original NCBI Blast implementation. We have found that prefiltering offers at least a factor of 5 and up to 3 orders of magnitude reduction in the database space that needs to be fully searched. Due to its prefiltering nature, our approach can be combined with all major reconfigurable acceleration architectures that have been presented to date. | Biological information signal processor The computation requirements for mapping and sequencing the human genome might soon exceed the capability of any existing supercomputer. The systolic array processor presented in this paper, called biological information signal processor (BISP), has the capability to satisfy the current and anticipated future computational requirements for performing sequence comparisons based on the T.F. Smith and M.S. Waterman algorithm (1981) as extended by M.S. Waterman and M. Eggert (1987). The BISP can conduct the most time-consuming sequence comparison functions, establishing both global and local relationships between two sequences. A modified Smith and Waterman algorithm is presented in this paper for efficient VLSI implementation. Methods are developed to reduce the BISP systolic array I/O bandwidth problem by reporting only the statistically significant results. Estimated performance of the BISP is compared with several different computer architectures. | RC-BLAST: Towards a Portable, Cost-Effective Open Source Hardware Implementation Basic Local Alignment Search Tool (BLAST) is a standard computer application that molecular biologists use to search for sequence similarity in genomic databases. This report describes an FPGA-based hardware implementation designed to accelerate the BLAST algorithm.
FPGA-based custom computing machines, more widely known as Reconfigurable Computing, are supported by a number of vendors and the basic cost of FPGA hardware is dramatically decreasing. Hence, the main objective of this project is to explore the feasibility of using this new technology to realize a portable, Open Source FPGA-based accelerator for the BLAST Algorithm. The present design is targeted to an AceII card and the design is based on the latest version of BLAST available from NCBI. Since the entire application does not fit in hardware, a profile study was conducted that identifies the computationally intensive part of BLAST. An FPGA hardware component has been designed and implemented for this critical segment. The portability and cost-effectiveness of the design are discussed. | Logic Programming and Negation: A Survey. We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them. 1991 Mathematics Subject Classification: 68Q40, 68T15. CR Categories: F.3.2., F.4.1, H.3.3, I.2.3. Keywords and Phrases: negation, general logic programs, non-monotonic reasoning. Notes. The work of the first author was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2). The work... | On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided. | Consensus and Cooperation in Networked Multi-Agent Systems This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analys... | Reasoning about action I: a possible worlds approach Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to the related problem that it is not always reasonable to explicitly specify all of the consequences of actions. The approach involves keeping a single model of the world that is updated when actions... | Compilability of Domain Descriptions in the Language A | EVENODD: an efficient scheme for tolerating double disk failures in RAID architectures We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations.
This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error | Parity logging overcoming the small write problem in redundant disk arrays Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for read accesses and large write accesses. Their performance on small writes, however, is much worse than mirrored disks—the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide a detailed analysis of parity logging and competing schemes—mirroring, floating storage, and RAID level 5— and verify these models by simulation. Parity logging provides performance competitive with mirroring, the best of the alternative single failure tolerating disk array organizations. However, its overhead cost is close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching much more effectively than all three alternative approaches. | On the Complexity of Plan Adaptation by Derivational Analogy in a Universal Classical Planning Framework In this paper we present an algorithm called DerUCP, which can be regarded as a general model for plan adaptation using Derivational Analogy. Using DerUCP, we show that previous results on the complexity of plan adaptation do not apply to Derivational Analogy. We also show that Derivational Analogy can potentially produce exponential reductions in the size of the search space generated by a planning system. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.066667 | 0.022222 | 0.016667 | 0.007692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
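Illustrative note on the row above: its query and several of its ranked abstracts describe prefiltering a sequence database so that only sequences likely to produce hits reach the full BLAST search. The toy sketch below is only a software caricature of that idea (the hardware pipelines and scoring heuristics of the cited systems are not modelled, and the sequences are invented): it keeps a database sequence only if it shares at least one exact k-letter word with the query.

```python
def kmers(seq, k=3):
    """All exact length-k words occurring in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def prefilter(query, database, k=3):
    """Keep only database sequences sharing a length-k word with the query;
    everything else is discarded before the expensive full alignment stage."""
    query_words = kmers(query, k)
    return [name for name, seq in database.items() if kmers(seq, k) & query_words]

# Invented toy protein sequences.
db = {
    "seqA": "MKTAYIAKQR",
    "seqB": "GGGGGGGGGG",
    "seqC": "LLAKQRPEW",
}
print(prefilter("AYIAKQ", db, k=3))   # ['seqA', 'seqC'] share a word such as 'AKQ'
```

The point of such a filter is that it must never reject a sequence the full search would have reported; it is allowed to pass false positives, since those are simply re-checked downstream.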
A hierarchical structure for fault tolerant reactive programs A new approach to software fault tolerance in concurrent programs modeled as reactive systems is proposed. It is based on a hierarchical structure and on the combined use of different fault tolerant schemes (e.g. transactions to protect data and a conversation-like scheme to protect processes). Among the merits of this new approach there is the possibility of an effective use of different programming languages to implement diverse software versions also in concurrent programs. | Distributed, object-based programming systems The development of distributed operating systems and object-based programming languages makes possible an environment in which programs consisting of a set of interacting modules, or objects, may execute concurrently on a collection of loosely coupled processors. An object-based programming language encourages a methodology for designing and creating a program as a set of autonomous components, whereas a distributed operating system permits a collection of workstations or personal computers to be treated as a single entity. The amalgamation of these two concepts has resulted in systems that shall be referred to as distributed, object-based programming systems. This paper discusses issues in the design and implementation of such systems. Following the presentation of fundamental concepts and various object models, issues in object management, object interaction management, and physical resource management are discussed. Extensive examples are drawn from existing systems. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Logic programs with classical negation | The well-founded semantics for general logic programs A general logic program (abbreviated to “program” hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the “meaning of the program,” or its “declarative semantics.” Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a “satisfactory” total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model.
It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of “stratified” and “locally stratified” programs. The method in this paper is also compared with other proposals in the literature, including Clark’s “program completion,” Fitting’s and Kunen’s 3-valued interpretations of it, and the “stable models” of Gelfond and Lifschitz. | Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems. | Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered, partially-occluded images with a computation time of under 2 seconds. | Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. | Improving the I/O Performance of Real-Time Database Systems with Multiple-Disk Storage Structures | Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem.
SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF, called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation. | An A-Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites.
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.001575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
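The RAID abstract in the row above spells out the structure it protects: n·n data elements, n row parities and n column parities, plus n extra elements that mirror half of the existing parities. The sketch below lays that structure out concretely; mirroring the row parities rather than the column parities is an assumption made here purely for illustration, and the integer "blocks" stand in for real disk contents.

```python
# Sketch of a two-dimensional parity array with mirrored parities, following the
# description in the abstract above: n*n data elements, n row parities, n column
# parities, and n additional elements mirroring half of the parities.
from functools import reduce
from operator import xor

def build_array(data):            # data: n x n list of ints (stand-ins for blocks)
    n = len(data)
    row_parity = [reduce(xor, data[i]) for i in range(n)]
    col_parity = [reduce(xor, (data[i][j] for i in range(n))) for j in range(n)]
    mirror = list(row_parity)     # n extra elements, copies of the row parities
    return row_parity, col_parity, mirror

if __name__ == "__main__":
    n = 4
    data = [[(i * n + j) * 17 % 251 for j in range(n)] for i in range(n)]
    rp, cp, mp = build_array(data)
    # Losing a row-parity element is now harmless: its mirror holds the same value.
    assert rp == mp
    # A single lost data element can still be rebuilt from its row parity.
    rebuilt = reduce(xor, data[2][:1] + data[2][2:]) ^ rp[2]
    assert rebuilt == data[2][1]
```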
Planning in nondeterministic domains under partial observability via symbolic model checking Planning under partial observability is one of the most significant and challenging planning problems. It has been shown to be hard, both theoretically and experimentally. In this paper, we present a novel approach to the problem of planning under partial observability in non-deterministic domains. We propose an algorithm that searches through a (possibly cyclic) and-or graph induced by the domain. The algorithm generates conditional plans that are guaranteed to achieve the goal despite the uncertainty in the initial condition, the uncertain effects of actions, and the partial observability of the domain. We implement the algorithm by means of BDD-based, symbolic model checking techniques, in order to tackle in practice the exponential blow up of the search space. We show experimentally that our approach is practical by evaluating the planner with a set of problems taken from the literature and comparing it with other state of the art planners for partially observable domains. | Mapping conformant planning into SAT through compilation and projection Conformant planning is a variation of classical AI planning where the initial state is partially known and actions can have non-deterministic effects. While a classical plan must achieve the goal from a given initial state using deterministic actions, a conformant plan must achieve the goal in the presence of uncertainty in the initial state and action effects. Conformant planning is computationally harder than classical planning, and unlike classical planning, cannot be reduced polynomially to SAT (unless P = NP). Current SAT approaches to conformant planning, such as those considered by Giunchiglia and colleagues, thus follow a generate-and-test strategy: the models of the theory are generated one by one using a SAT solver (assuming a given planning horizon), and from each such model, a candidate conformant plan is extracted and tested for validity using another SAT call. This works well when the theory has few candidate plans and models, but otherwise is too inefficient. In this paper we propose a different use of a SAT engine where conformant plans are computed by means of a single SAT call over a transformed theory. This transformed theory is obtained by projecting the original theory over the action variables. This operation, while intractable, can be done efficiently provided that the original theory is compiled into d-DNNF (Darwiche 2001), a form akin to OBDDs (Bryant 1992). The experiments that are reported show that the resulting compile-project-sat planner is competitive with state-of-the-art optimal conformant planners and improves upon a planner recently reported at ICAPS-05. | Improving Heuristics for Planning as Search in Belief Space Search in the space of beliefs has been proposed as a convenient framework for tackling planning under uncertainty. Significant improvements have been recently achieved, especially thanks to the use of symbolic model checking techniques such as Binary Decision Diagrams. However, the problem is extremely complex, and the heuristics available so far are unable to provide enough guidance for an informed search. In this paper we tackle the problem of defining effective heuristics for driving the search in belief space. The basic intuition is that the "degree of knowledge" associated with the belief states reached by partial plans must be explicitly taken into account when deciding the search direction.
We propose a way of ranking belief states depending on their degree of knowledge with respect to a given set of boolean functions. This allows us to define a planning algorithm based on the identification and solution of suitable "knowledge subgoals", that are used as intermediate steps during the search. The solution of knowledge subgoals is based on the identification of "knowledge acquisition conditions", i.e. subsets of the state space from where it is possible to perform knowledge acquisition actions. We show the effectiveness of the proposed ideas by observing substantial improvements in the conformant planning algorithms of MBP. | OBDD-based universal planning for synchronized agents in non-deterministic domains Recently model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. OBDDs can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present UMOP, a new universal OBDD-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to build an efficient OBDD-based representation of an NADL description. The UMOP planning system uses NADL and different OBDD-based universal planning algorithms. It includes the previously developed strong and strong cyclic planning algorithms. In addition, we introduce our new optimistic planning algorithm that relaxes optimality guarantees and generates plausible universal plans in some domains where no strong nor strong cyclic solution exists. We present empirical results applying UMOP to domains ranging from deterministic and single-agent with no environment actions to non-deterministic and multi-agent with complex environment actions. UMOP is shown to be a rich and efficient planning system. | A Framework for Planning with Extended Goals under Partial Observability Planning in nondeterministic domains with temporally extended goals under partial observability is one of the most challenging problems in planning. Subsets of this problem have already been addressed in the literature. For instance, planning for extended goals has been developed under the simplifying hypothesis of full observability. And the problem of partial observability has been tackled in the case of simple reachability goals. The general combination of extended goals and partial observability is, to the best of our knowledge, still an open problem, whose solution turns out to be by no means trivial. In this paper we do not solve the problem in its generality, but we perform a significant step in this direction by providing a solid basis for tackling it. Our first contribution is the definition of a general framework that encompasses both partial observability and temporally extended goals, and that allows for describing complex, realistic domains and significant goals over them.
A second contribution is the definition of the K-CTL goal language, which extends CTL (a classical language for expressing temporal requirements) with a knowledge operator that allows reasoning about the information that can be acquired at run-time. This is necessary to deal with partially observable domains, where only limited run-time "knowledge" about the domain state is available. A general mechanism for plan validation with K-CTL goals is also defined. This mechanism is based on a monitor that plays the role of evaluating the truth of knowledge predicates. | Some Results on the Complexity of Planning with Incomplete Information Planning with incomplete information may mean a number of different things; that certain facts of the initial state are not known, that operators can have random or nondeterministic effects, or that the plans created contain sensing operations and are branching. Study of the complexity of incomplete information planning has so far been concentrated on probabilistic domains, where a number of results have been found. We examine the complexity of planning in nondeterministic propositional... | Conformant Graphplan Planning under uncertainty is a difficult task. If sensory information is available, it is possible to do contingency planning - that is, develop plans where certain branches are executed conditionally, based on the outcome of sensory actions. However, even without sensory information, it is often possible to develop useful plans that succeed no matter which of the allowed states the world is actually in. We refer to this type of planning as conformant planning. Few conformant planners have been built, partly because conformant planning requires the ability to reason about disjunction. In this paper we describe Conformant Graphplan (CGP), a Graphplan-based planner that develops sound (non-contingent) plans when faced with uncertainty in the initial conditions and in the outcome of actions. The basic idea is to develop separate plan graphs for each possible world. This requires some subtle changes to both the graph expansion and solution extraction phases of Graphplan. In particular, the solution extraction phase must consider the unexpected side effects of actions in other possible worlds, and must confront any undesirable effects. We show that CGP performs significantly better than two previous (probabilistic) conformant planners. | Decision-theoretic planning: Structural assumptions and computational leverage Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans.
Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations. Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques, in particular those based on the use of structured, intensional representations, can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations. | QuBE++: An Efficient QBF Solver In this paper we describe QuBE++, an efficient solver for Quantified Boolean Formulas (QBFs). To the best of our knowledge, QuBE++ is the first QBF reasoning engine that uses lazy data structures both for unit clause propagation and for pure literal detection. QuBE++ also features non-chronological backtracking and a branching heuristic that leverages the information gathered during the backtracking phase. Owing to such techniques and to a careful implementation, QuBE++ turns out to be an efficient and robust solver, whose performance exceeds that of other state-of-the-art QBF engines, and is comparable with the best engines currently available on SAT instances. | Narrative based Postdictive Reasoning for Cognitive Robotics. Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real world considerations is a challenge and crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks. In the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction triggered abnormality detection and re-planning for real-time robot control: (a) Narrative-based knowledge about the environment is obtained via a larger smart environment framework; and (b) abnormalities are postdicted from stable-models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of an approximate epistemic action theory based planner implemented via a translation to answer-set programming. | The well-founded semantics for general logic programs A general logic program (abbreviated to “program” hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the “meaning of the program,” or its “declarative semantics.” Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a “satisfactory” total model; for such programs, the question of an appropriate partial model arises.
Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of “stratified” and “locally stratified” programs. The method in this paper is also compared with other proposals in the literature, including Clark’s “program completion,” Fitting’s and Kunen’s 3-valued interpretations of it, and the “stable models” of Gelfond and Lifschitz. | Intensive Data Management in Parallel Systems: A Survey In this paper we identify and discuss issues that are relevant to the design and usage of databases handling massive amounts of data in parallel environments. The issues that are tackled include the placement of the data in the memory, file systems, concurrent access to data, effects on query processing, and the implications of specific machine architectures. Since not all parameters are tractable in rigorous analysis, results of performance and benchmarking studies are highlighted for several systems. | Representing Defeasible Constraints and Observations in Action Theories. We propose a general formulation of reasoning about action based on prioritized logic programming, where defeasibility handling is explicitly taken into account. In particular, we consider two types of defeasibilities in our problem domains: defeasible constraints and defeasible observations. By introducing the notion of priority in action formulation, we show that our approach provides a unified framework to handle these defeasibilities in temporal prediction and postdiction reasoning with... | Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has been shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification. | 1.00947 | 0.014621 | 0.008519 | 0.008419 | 0.005598 | 0.003741 | 0.002342 | 0.000978 | 0.000125 | 0.000026 | 0.000001 | 0 | 0 | 0
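The query abstract of the row above searches an and-or graph over belief states, where a belief state is the set of world states the agent might be in. The toy sketch below shows only that underlying bookkeeping: progressing a belief through a non-deterministic action and filtering it with an observation. The explicit sets, the action and observation dictionaries, and the state names are all invented for illustration; the planner in the abstract represents the same objects symbolically with BDDs.

```python
# Minimal illustration of belief-state bookkeeping for planning under partial
# observability: a belief is a set of states; a non-deterministic action maps it
# to the union of its possible successors, and an observation narrows it down.

def progress(belief, action, effects):
    """effects[(state, action)] -> set of possible successor states."""
    nxt = set()
    for s in belief:
        nxt |= effects.get((s, action), set())
    return frozenset(nxt)

def observe(belief, obs, obs_model):
    """Keep only the states compatible with the observation received."""
    return frozenset(s for s in belief if obs in obs_model.get(s, set()))

if __name__ == "__main__":
    effects = {("s0", "move"): {"s1", "s2"}, ("s1", "move"): {"s2"}}
    obs_model = {"s1": {"wall"}, "s2": {"clear"}}
    b0 = frozenset({"s0"})
    b1 = progress(b0, "move", effects)         # {'s1', 's2'}: uncertain outcome
    b_clear = observe(b1, "clear", obs_model)  # {'s2'}: observation resolves it
    print(sorted(b1), sorted(b_clear))
```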
Red-Black Planning: A New Tractability Analysis and Heuristic Function | Discovering hidden structure in factored MDPs Markov Decision Processes (MDPs) describe a wide variety of planning scenarios ranging from military operations planning to controlling a Mars rover. However, today's solution techniques scale poorly, limiting MDPs' practical applicability. In this work, we propose algorithms that automatically discover and exploit the hidden structure of factored MDPs. Doing so helps solve MDPs faster and with less memory than state-of-the-art techniques. Our algorithms discover two complementary state abstractions - basis functions and nogoods. A basis function is a conjunction of literals; if the conjunction holds true in a state, this guarantees the existence of at least one trajectory to the goal. Conversely, a nogood is a conjunction whose presence implies the non-existence of any such trajectory, meaning the state is a dead end. We compute basis functions by regressing goal descriptions through a determinized version of the MDP. Nogoods are constructed with a novel machine learning algorithm that uses basis functions as training data. Our state abstractions can be leveraged in several ways. We describe three diverse approaches - GOTH, a heuristic function for use in heuristic search algorithms such as RTDP; ReTrASE, an MDP solver that performs modified Bellman backups on basis functions instead of states; and SixthSense, a method to quickly detect dead-end states. In essence, our work integrates ideas from deterministic planning and basis function-based approximation, leading to methods that outperform existing approaches by a wide margin. | Merge-and-Shrink Abstraction: A Method for Generating Lower Bounds in Factored State Spaces Many areas of computer science require answering questions about reachability in compactly described discrete transition systems. Answering such questions effectively requires techniques to be able to do so without building the entire system. In particular, heuristic search uses lower-bounding (“admissible”) heuristic functions to prune parts of the system known to not contain an optimal solution. A prominent technique for deriving such bounds is to consider abstract transition systems that aggregate groups of states into one. The key question is how to design and represent such abstractions. The most successful answer to this question are pattern databases, which aggregate states if and only if they agree on a subset of the state variables. Merge-and-shrink abstraction is a new paradigm that, as we show, allows to compactly represent a more general class of abstractions, strictly dominating pattern databases in theory. We identify the maximal class of transition systems, which we call factored transition systems, to which merge-and-shrink applies naturally, and we show that the well-known notion of bisimilarity can be adapted to this framework in a way that still guarantees perfect heuristic functions, while potentially reducing abstraction size exponentially. Applying these ideas to planning, one of the foundational subareas of artificial intelligence, we show that in some benchmarks this size reduction leads to the computation of perfect heuristic functions in polynomial time and that more approximate merge-and-shrink strategies yield heuristic functions competitive with the state of the art. | Improving Delete Relaxation Heuristics Through Explicitly Represented Conjunctions.
Heuristic functions based on the delete relaxation compute upper and lower bounds on the optimal delete-relaxation heuristic h(+) and are of paramount importance in both optimal and satisficing planning. Here we introduce a principled and flexible technique for improving h(+) by augmenting delete-relaxed planning tasks with a limited amount of delete information. This is done by introducing special fluents that explicitly represent conjunctions of fluents in the original planning task, rendering the perfect heuristic h(+) in the limit. Previous work has introduced a method in which the growth of the task is potentially exponential in the number of conjunctions introduced. We formulate an alternative technique relying on conditional effects, limiting the growth of the task to be linear in this number. We show that this method still renders the perfect heuristic h(+) in the limit. We propose techniques to find an informative set of conjunctions to be introduced in different settings, and analyze and extend existing methods for lower-bounding and upper-bounding in the presence of conditional effects. We evaluate the resulting heuristic functions empirically on a set of IPC benchmarks, and show that they are sometimes much more informative than standard delete-relaxation heuristics. | The fast downward planning system Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward's approach to solving multivalued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward's best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multiheuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the "classical" (i.
e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements. | The nature of statistical learning theory~. First Page of the Article | TCP Nice: a mechanism for background transfers Many distributed applications can make use of large background transfers--transfers of data that humans are not waiting for--to improve availability, reliability, latency or consistency. However, given the rapid fluctuations of available network bandwidth and changing resource costs due to technology trends, hand tuning the aggressiveness of background transfers risks (1) complicating applications, (2) being too aggressive and interfering with other applications, and (3) being too timid and not gaining the benefits of background transfers. Our goal is for the operating system to manage network resources in order to provide a simple abstraction of near zero-cost background transfers. Our system, TCP Nice, can provably bound the interference inflicted by background flows on foreground flows in a restricted network model. And our microbenchmarks and case study applications suggest that in practice it interferes little with foreground flows, reaps a large fraction of spare network bandwidth, and simplifies application construction and deployment. For example, in our prefetching case study application, aggressive prefetching improves demand performance by a factor of three when Nice manages resources; but the same prefetching hurts demand performance by a factor of six under standard network congestion control. | LIBSVM: A library for support vector machines LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems theoretical convergence multiclass classification probability estimates and parameter selection are discussed in detail. | The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5] where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the “counting polynomial-time hierarchy” which is introduced here. It turns out that some of the languages are not comparable, unless P=NP Some problems left open in [2] are solved. | Planning as search: a quantitative approach We present the thesis that planning can be viewed as problem-solving search using subgoals, macro-operators, and abstraction as knowledge sources. Our goal is to quantify problem-solving performance using these sources of knowledge. 
New results include the identification of subgoal distance as a fundamental measure of problem difficulty, a multiplicative time-space tradeoff for macro-operators, and an analysis of abstraction which concludes that abstraction hierarchies can reduce exponential problems to linear complexity. | Application performance and flexibility on exokernel systems The exokernel operating system architecture safely gives untrusted software efficient control over hardware and software resources by separating management from protection. This paper describes an exokernel system that allows specialized applications to achieve high performance without sacrificing the performance of unmodified UNIX programs. It evaluates the exokernel architecture by measuring end-to-end application performance on Xok, an exokernel for Intel x86-based computers, and by comparing Xok's performance to the performance of two widely-used 4.4BSD UNIX systems (FreeBSD and OpenBSD). The results show that common unmodified UNIX applications can enjoy the benefits of exokernels: applications either perform comparably on Xok/ExOS and the BSD UNIXes, or perform significantly better. In addition, the results show that customized applications can benefit substantially from control over their resources (e.g., a factor of eight for a Web server). This paper also describes insights about the exokernel approach gained through building three different exokernel systems, and presents novel approaches to resource multiplexing. | Reasoning About Actions in Narrative Understanding Reasoning about actions has been a focus of interest in AI from the beginning and continues to receive attention. But the range of situations considered has been rather narrow and falls well short of what is needed for understanding natural language. Language understanding requires sophisticated reasoning about actions and events and the world's languages employ a variety of grammatical and lexical devices to construe, direct attention and focus on, and control inferences about actions and events. We implemented a neurally inspired computational model that is able to reason about linguistic action and event descriptions, such as those found in news stories. The system uses an active event representation that also seems to provide natural and cognitively motivated solutions to classical problems in logical theories of reasoning about actions. For logical approaches to reasoning about actions, we suggest that looking at story understanding sets up fairly strong desiderata both in terms of the fine-grained event and action distinctions and the kinds of real-time inferences required. | Representing the process semantics in the situation calculus This paper presents a formal method based on the high-level semantics of processes to reason about continuous change. With a case study we show how the semantics of processes can be integrated with the situation calculus. The soundness and completeness of the situation calculus with respect to the process semantics are proven. Furthermore, a logic programming implementation is provided to support the semantics of processes within the situation calculus. | Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia.
To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples. | 1.2 | 0.2 | 0.1 | 0.025 | 0.002222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
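The delete-relaxation heuristics discussed in the row above (h(+), its bounds, and the conjunction-augmented variants) all build on the same relaxed cost propagation. Below is a minimal sketch of the standard additive variant h_add, not the conjunction-based refinement the abstract introduces; the tiny STRIPS-style task, its fluent names, and its unit action costs are invented purely for illustration.

```python
# Sketch of the additive delete-relaxation heuristic (h_add): delete lists are
# ignored and fact costs are propagated to a fixed point.  This is the baseline
# that techniques with explicitly represented conjunctions refine.
import math

def h_add(state, goal, actions):
    """actions: list of (preconditions, add_effects, cost) with frozenset parts."""
    cost = {f: 0.0 for f in state}
    changed = True
    while changed:
        changed = False
        for pre, add, c in actions:
            if all(p in cost for p in pre):
                reach = c + sum(cost[p] for p in pre)
                for f in add:
                    if reach < cost.get(f, math.inf):
                        cost[f] = reach
                        changed = True
    return sum(cost.get(g, math.inf) for g in goal)

if __name__ == "__main__":
    actions = [
        (frozenset({"at-a"}), frozenset({"at-b"}), 1),
        (frozenset({"at-b"}), frozenset({"at-c"}), 1),
        (frozenset({"at-b"}), frozenset({"have-key"}), 1),
    ]
    print(h_add(frozenset({"at-a"}), frozenset({"at-c", "have-key"}), actions))  # 4.0
```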
Trends in extreme learning machines: A review. Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM have been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives. | Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays. In this letter, the global asymptotical stability analysis problem is considered for a class of Markovian jumping stochastic Cohen-Grossberg neural networks (CGNNs) with mixed delays including discrete delays and distributed delays. An alternative delay-dependent stability analysis result is established based on the linear matrix inequality (LMI) technique, which can easily be checked by utilizing the numerically efficient Matlab LMI toolbox. Neither system transformation nor free-weight matrix via Newton-Leibniz formula is required. Two numerical examples are included to show the effectiveness of the result. | Deep extreme learning machines: supervised autoencoding architecture for classification. We present a method for synthesising deep neural networks using Extreme Learning Machines (ELMs) as a stack of supervised autoencoders. We test the method using standard benchmark datasets for multi-class image classification (MNIST, CIFAR-10 and Google Streetview House Numbers (SVHN)), and show that the classification error rate can progressively improve with the inclusion of additional autoencoding ELM modules in a stack. Moreover, we found that the method can correctly classify up to 99.19% of MNIST test images, which surpasses the best error rates reported for standard 3-layer ELMs or previous deep ELM approaches when applied to MNIST. The approach simultaneously offers a significantly faster training algorithm to achieve its best performance (in the order of 5min on a four-core CPU for MNIST) relative to a single ELM with the same total number of hidden units as the deep ELM, hence offering the best of both worlds: lower error rates and fast implementation. | Semi-supervised and unsupervised extreme learning machines. Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. 
In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency. | Learning methods for generic object recognition with invariance to pose and lighting We assess the applicability of several popular learning methods for the problem of recognizing generic visual categories with invariance to pose, lighting, and surrounding clutter. A large dataset comprising stereo image pairs of 50 uniform-colored toys under 36 azimuths, 9 elevations, and 6 lighting conditions was collected (for a total of 194,400 individual images). The objects were 10 instances of 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. Five instances of each category were used for training, and the other five for testing. Low-resolution grayscale images of the objects with various amounts of variability and surrounding clutter were used for training and testing. Nearest Neighbor methods, Support Vector Machines, and Convolutional Networks, operating on raw pixels or on PCA-derived features were tested. Test error rates for unseen object instances placed on uniform backgrounds were around 13% for SVM and 7% for Convolutional Nets. On a segmentation/recognition task with highly cluttered images, SVM proved impractical, while Convolutional nets yielded 16/7% error. A real-time version of the system was implemented that can detect and classify objects in natural scenes at around 10 frames per second. | Extracting and composing robust features with denoising autoencoders Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite. | Classification using discriminative restricted Boltzmann machines Recently, many applications for Restricted Boltzmann Machines (RBMs) have been developed for a large variety of learning problems. 
However, RBMs are usually used as feature extractors for another learning algorithm or to provide a good initialization for deep feed-forward neural network classifiers, and are not considered as a standalone solution to classification problems. In this paper, we argue that RBMs provide a self-contained framework for deriving competitive non-linear classifiers. We present an evaluation of different learning algorithms for RBMs which aim at introducing a discriminative component to RBM training and improving their performance as classifiers. This approach is simple in that RBMs are used directly to build a classifier, rather than as a stepping stone. Finally, we demonstrate how discriminative RBMs can also be successfully employed in a semi-supervised setting. | On the power of small-depth threshold circuits The power of threshold circuits of small depth is investigated. In particular, functions that require exponential-size unweighted threshold circuits of depth 3 when the bottom fan-in is restricted are given. It is proved that there are monotone functions f_k that can be computed in depth k and linear size by AND/OR circuits but require exponential size to be computed by a depth-(k-1) monotone weighted threshold circuit. | Loss Functions for Discriminative Training of Energy-Based Models. Probabilistic graphical models associate a probability to each configuration of the relevant variables. Energy-based models (EBM) associate an energy to those configurations, eliminating the need for proper normalization of probability distributions. Making a decision (an inference) with an EBM consists in comparing the energies associated with various configurations of the variable to be predicted, and choosing the one with the smallest energy. Such systems must be trained discriminatively to associate low energies to the desired configurations and higher energies to undesired configurations. A wide variety of loss functions can be used for this purpose. We give sufficient conditions that a loss function should satisfy so that its minimization will cause the system to approach the desired behavior. We give many specific examples of suitable loss functions, and show an application to object recognition in images. | Trace driven analysis of write caching policies for disks The I/O subsystem in a computer system is becoming the bottleneck as a result of recent dramatic improvements in processor speeds. Disk caches have been effective in closing this gap but the benefit is restricted to the read operations as the write I/Os are usually committed to disk to maintain consistency and to allow for crash recovery. As a result, write I/O traffic is becoming dominant and solutions to alleviate this problem are becoming increasingly important. A simple solution which can easily work with existing file systems is to use non-volatile disk caches together with a write-behind strategy. In this study, we look at the issues around managing such a cache using a detailed trace driven simulation. Traces from three different commercial sites are used in the analysis of various policies for managing the write cache. We observe that even a simple write-behind policy for the write cache is effective in reducing the total number of writes by over 50%. We further observe that the use of hysteresis in the policy to purge the write cache, with two thresholds, yields substantial improvement over a single threshold scheme.
The inclusion of a mechanism to piggyback blocks from the write cache with read miss I/Os further reduces the number of writes to only about 15% of the original total number of write operations. We compare two piggybacking options and also study the impact of varying the write cache size. We briefly looked at the case of a single non-volatile disk cache to estimate the performance impact of statically partitioning the cache for reads and writes. | Robust Video Fingerprinting for Content-Based Video Identification Video fingerprints are feature vectors that uniquely characterize one video clip from another. The goal of video fingerprinting is to identify a given video query in a database (DB) by measuring the distance between the query fingerprint and the fingerprints in the DB. The performance of a video fingerprinting system, which is usually measured in terms of pairwise independence and robustness, is directly related to the fingerprint that the system uses. In this paper, a novel video fingerprinting method based on the centroid of gradient orientations is proposed. The centroid of gradient orientations is chosen due to its pairwise independence and robustness against common video processing steps that include lossy compression, resizing, frame rate change, etc. A threshold used to reliably determine a fingerprint match is theoretically derived by modeling the proposed fingerprint as a stationary ergodic process, and the validity of the model is experimentally verified. The performance of the proposed fingerprint is experimentally evaluated and compared with that of other widely-used features. The experimental results show that the proposed fingerprint outperforms the considered features in the context of video fingerprinting. | On Periodic Resource scheduling for Continuous-Media Databases. | RAID 6 Hardware Acceleration Inexpensive, reliable hard disk storage is increasingly required in both businesses and the home. As disk capacities increase and multiple drives are combined in one system the probability of multiple disk failures increases. Through the adoption of RAID 6 the capability to recover from up to two simultaneous disk failures becomes available. In this article, we present three different RAID 6 implementations each tailored to support different target applications and optimized to reduce overall hardware resource utilization. We present an optimal Reed-Solomon-based RAID 6 implementation for arrays of four disks. We also present the smallest in terms of hardware resource utilization as well having the highest throughput RAID 6 hardware solution for disk arrays of up to 15 drives. Finally, we present an implementation supporting up to 255 disks in a single array. | Anatomical Structure Sketcher For Cephalograms By Bimodal Deep Learning The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnose and treatment planning in clinical dentistry. The robust anatomical structure detection and accurate annotation remain challenging considering the personal skeletal variations and image blurs caused by device-specific projection magnification, together with structure overlapping in the lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. 
The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path in a manner similar to the data completion. The proposed method is robust not only to structure detection, but also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method. | 1.050641 | 0.025 | 0.025 | 0.009712 | 0.000992 | 0.000367 | 0.000023 | 0.000006 | 0.000001 | 0 | 0 | 0 | 0 | 0 |
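The basic extreme learning machine recipe that the ELM abstracts in this row build on is short enough to write out: hidden-layer weights are drawn at random and left untouched, and only the linear output layer is fit by a least-squares solve. The sketch below uses numpy and a synthetic two-class problem invented solely to exercise the code; it is the plain supervised ELM, not the semi-supervised or deep variants described above.

```python
# Minimal extreme learning machine: random, fixed hidden weights plus a
# least-squares output layer.  Synthetic XOR-like data for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=100):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                   # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

if __name__ == "__main__":
    X = rng.normal(size=(400, 2))
    T = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # XOR-like labels
    W, b, beta = elm_train(X[:300], T[:300])
    acc = ((elm_predict(X[300:], W, b, beta) > 0.5) == (T[300:] > 0.5)).mean()
    print(f"held-out accuracy: {acc:.2f}")
```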
File system design for an NFS file server appliance Network Appliance Corporation recently began shipping a new kind of network server called an NFS file server appliance, which is a dedicated server whose sole function is to provide NFS file service. The file system requirements for an NFS appliance are different from those for a general-purpose UNIX system, both because an NFS appliance must be optimized for network file access and because an appliance must be easy to use. This paper describes WAFL (Write Anywhere File Layout), which is a file system designed specifically to work in an NFS appliance. The primary focus is on the algorithms and data structures that WAFL uses to implement Snapshots, which are read-only clones of the active file system. WAFL uses a copy-on-write technique to minimize the disk space that Snapshots consume. This paper also describes how WAFL uses Snapshots to eliminate the need for file system consistency checking after an unclean shutdown. | LegionFS: a secure and scalable file system supporting cross-domain high-performance applications Realizing that current file systems cannot cope with the diverse requirements of wide-area collaborations, researchers have developed data access facilities to meet their needs. Recent work has focused on comprehensive data access architectures. In order to fulfill the evolving requirements in this environment, we suggest a more fully-integrated architecture built upon the fundamental tenets of naming, security, scalability, extensibility, and adaptability. These form the underpinning of the Legion File System (LegionFS). This paper motivates the need for these requirements and presents benchmarks that highlight the scalability of LegionFS. LegionFS aggregate throughput follows the linear growth of the network, yielding an aggregate read bandwidth of 193.8 MB/s on a 100 Mbps Ethernet backplane with 50 simultaneous readers. The serverless architecture of LegionFS is shown to benefit important scientific applications, such as those accessing the Protein Data Bank, within both local- and wide-area environments. | Using MEMS-based storage in disk arrays Current disk arrays, the basic building blocks of high-performance storage systems, are built around two memory technologies: magnetic disk drives, and non-volatile DRAM caches. Disk latencies are higher by six orders of magnitude than non-volatile DRAM access times, but cache costs over 1000 times more per byte. A new storage technology based on microelectromechanical systems (MEMS) will soon offer a new set of performance and cost characteristics that bridge the gap between disk drives and the caches. We evaluate potential gains in performance and cost by incorporating MEMS-based storage in disk arrays. Our evaluation is based on exploring potential placements of MEMS-based storage in a disk array. We used detailed disk array simulators to replay I/O traces of real applications for the evaluation. We show that replacing disks with MEMS-based storage can improve the array performance dramatically, with a cost performance ratio several times better than conventional arrays even if MEMS storage costs ten times as much as disk. We also demonstrate that hybrid MEMS/disk arrays, which cost less than purely MEMS-based arrays, can provide substantial improvements in performance and cost/performance over conventional arrays.
| A new approach to I/O performance evaluation: self-scaling I/O benchmarks, predicted I/O performance Current I/O benchmarks suffer from several chronic problems: they quickly become obsolete, they do not stress the I/O system, and they do not help in understanding I/O system performance. We propose a new approach to I/O performance analysis. First, we propose a self-scaling benchmark that dynamically adjusts aspects of its workload according to the performance characteristic of the system being measured. By doing so, the benchmark automatically scales across current and future systems. The evaluation aids in understanding system performance by reporting how performance varies according to each of five workload parameters. Second, we propose predicted performance, a technique for using the results from the self-scaling evaluation to quickly estimate the performance for workloads that have not been measured. We show that this technique yields reasonably accurate performance estimates and argue that this method gives a far more accurate comparative performance evaluation than traditional single point benchmarks. We apply our new evaluation technique by measuring a SPARCstation 1+ with one SCSI disk, an HP 730 with one SCSI-II disk, a Sprite LFS DECstation 5000/200 with a three-disk disk array, a Convex C240 minisupercomputer with a four-disk disk array, and a Solbourne 5E/905 fileserver with a two-disk disk array. | Fast consistency checking for the Solaris file system Our Netra NFS group at Sun set out to solve the challenging problem of providing remote Network File System (NFS) service with high performance and availability. An NFS server must guarantee the permanence of changes to the file system before acknowledging an NFS request. Thus, the server's underlying local file system must perform update operations synchronously to stable storage with potentially high latency. Our solution to this problem involves using the Solaris Unix File System (UFS), derived from the Berkeley Fast File System (FFS), in conjunction with nonvolatile RAM (NVRAM) as fast stable storage. We evaluated the system using the LADDIS benchmark and as a result, developed a cacheing technique for block-mapping information that gave us a 23% increase in measured server throughput in our standard RAID-5 server configuration. With recent increases in disk capacity and RAID technology, filesystem sizes have reached a point not imagined by the FFS designers, requiring an approach to checking file-system consistency that does not grow proportionately with file-system size. We examined several log-based solutions to providing fast crash recovery, but none could use the NVRAM effectively and meet our performance requirements. As an alternative, we developed an approach that uses UFS but maintains file-system working-set information, so that the consistency checker needs to examine only the active portions of a file system. This approach met our performance goals and also reduced file-system consistency-checking times to between 3% and 25% of those in the original UFS implementation. | A comparison of FFS disk allocation policies The 4.4BSD file system includes a new algorithm for allocating disk blocks to files. The goal of this algorithm is to improve file clustering, increasing the amount of sequential I/O when reading or writing files, thereby improving file system performance. In this paper we study the effectiveness of this algorithm at reducing file system fragmentation.
We have created a program that artificially ages a file system by replaying a workload similar to that experienced by a real file system. We used this program to evaluate the effectiveness of the new disk allocation algorithm by replaying ten months of activity on two file systems that differed only in the disk allocation algorithms that they used. At the end of the ten month simulation, the file system using the new allocation algorithm had approximately half the fragmentation of a similarly aged file system that used the traditional disk allocation algorithm. Measuring the performance difference between the two file systems by reading and writing the same set of files on the two systems showed that this decrease in fragmentation improved file write throughput by 20% and read throughput by 32%. In certain test cases, the new allocation algorithm provided a performance improvement of greater than 50%. | Efficient Placement of Parity and Data to Tolerate Two Disk Failures in Disk Array Systems In this paper, we deal with the data/parity placement problem which is described as follows: how to place data and parity evenly across disks in order to tolerate two disk failures, given the number of disks N and the redundancy rate p which represents the amount of disk spaces to store parity information. To begin with, we transform the data/parity placement problem into the problem of constructing an N×N matrix such that the matrix will correspond to a solution to the problem. The method to construct a matrix has been proposed and we have shown how our method works through several illustrative examples. It is also shown that any matrix constructed by our proposed method can be mapped into a solution to the placement problem if a certain condition holds between N and p where N is the number of disks and p is a redundancy rate. | The Performance of Parity Placements in Disk Arrays Due to recent advances in central processing unit (CPU) and memory system performance, input/output (I/O) systems are increasingly limiting the performance of modern computer systems. Redundant arrays of inexpensive disks (RAID) have been proposed to meet the impending I/O crisis. RAIDs substitute many small inexpensive disks for a few large expensive disks to provide higher performance, smaller footprints, and lower power consumption at a lower cost than the large expensive disks they replace. RAIDs provide high availability by using parity encoding of data to survive disk failures. It is shown that the way parity is distributed in a RAID has significant consequences for performance. The performances of eight different parity placements are investigated using simulation. | Specifying data availability in multi-device file systems | The TickerTAIP parallel RAID architecture Traditional disk arrays have a centralized architecture, with a single controller through which all requests flow. Such a controller is a single point of failure, and its performance limits the maximum size that the array can grow to. We describe here TickerTAIP, a parallel architecture for disk arrays that distributes the controller functions across several loosely-coupled processors. The result is better scalability, fault tolerance, and flexibility.
This paper presents the TickerTAIP architecture and an evaluation of its behavior. We demonstrate the feasibility by an existence proof; describe a family of distributed algorithms for calculating RAID parity; discuss techniques for establishing request atomicity, sequencing and recovery; and evaluate the performance of the TickerTAIP design in both absolute terms and by comparison to a centralized RAID implementation. We conclude that the TickerTAIP architectural approach is feasible, useful, and effective. | A Case for Fault-Tolerant Memory for Transaction Processing | Answer set programming and plan generation The idea of answer set programming is to represent a given computational problem by a logic program whose answer sets correspond to solutions, and then use an answer set solver, such as SMODELS or DLV, to find an answer set for this program. Applications of this method to planning are related to the line of research on the frame problem that started with the invention of formal nonmonotonic reasoning in 1980. | Disaster recovery techniques for database systems The widespread use of computers has brought about revolutionary changes in society. Computers are becoming vital in all aspects of human life, whether employed in life-critical systems such as air traffic control and autopilot navigation control systems, or in point-of-sales management systems and cinema ticket purchasing systems. Data stored in computer systems is often a company’s most valuable asset, one that must be protected at all costs. Businesses also must be prepared to provide continued service in case of a disaster. Fault-tolerance techniques have been employed to increase computer system availability, and to reduce the damage caused by component failure. Vital data is stored on stable storage, which survives failures such as electrical outages or system crashes. Also, redundant copies of data can be placed on multiple stable storage devices. This approach protects data if failures in storage media are independent, but may be ineffective if disaster strikes. Recall the 1906 earthquake in San Francisco, which destroyed more than half the city. When the U.S. Federal Building in Oklahoma City was bombed, data as well as on-site backups were destroyed. Since data losses and system unavailability resulting from a disaster cripple the operation of an organization, federal legislation now requires the development of recovery plans [5]. Extensive backup procedures have been developed to protect against data losses during disasters, such as the grandfather-father-son backup procedure, the incremental logging technique, and the data image dumping method. In addition to guarding against data losses, a system must also provide its normal services after a disaster strikes. Thus, as with data, computer hardware must also be replicated. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. 
Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.007631 | 0.008964 | 0.006729 | 0.005635 | 0.003209 | 0.002431 | 0.001493 | 0.000675 | 0.000117 | 0.000022 | 0.000001 | 0 | 0 | 0 |
A New Approach to Tractable Planning We describe a restricted class of planning problems and polynomial time membership and plan existence decision algorithms for this class. The definition of the problem class is based on a graph representation of planning problems, similar to Petri nets, and the use of a graph grammar to characterise a subset of such graphs. Thus, testing membership in the class is a graph parsing problem. The planning algorithm also exploits this connection, making use of the parse tree. We show that the new problem class is incomparable with, i.e., neither a subset nor a superset of, previously known classes of tractable planning problems, and give an algorithm for solving problems in the class. Plan existence is decided by bottom-up label-propagation over the parse tree, similar in spirit to the algorithm for tree-shaped CSPs. Although it may be less transparent than restrictions on syntax or the causal graph, the use of a graph grammar has the important advantage of allowing us to explore novel classes of restrictions, that can not be formulated in those terms. To demonstrate this potential, we design a new tractable problem class, with the explicit aim of making it distinct from previously known tractable classes. The graph representation of planning problems we use is closely related to (and, indeed, strongly inspired by) the Petri net formalism, so our results are easily related also to known tractable classes of Petri nets. The class we define is novel also with respect to them. | Act Local, Think Global: Width Notions for Tractable Planning Many of the benchmark domains in AI planning are tractable on an individual basis. In this paper, we seek a theoretical, domain-independent explanation for their tractability. We present a family of structural conditions that both imply tractability and capture some of the established benchmark domains. These structural conditions are, roughly speaking, based on measures of how many variables need to be changed in order to move a state closer to a goal state. | Solving Simple Planning Problems with More Inference and No Search Many benchmark domains in AI planning including Blocks, Logistics, Gripper, Satellite, and others lack the interactions that characterize puzzles and can be solved non-optimally in low polynomial time. They are indeed easy problems for people, although as with many other problems in AI, not always easy for machines. In this paper, we address the question of whether simple problems such as these can be solved in a simple way, i.e., without search, by means of a domain-independent planner. We address this question empirically by extending the constraint-based planner CPT with additional domain-independent inference mechanisms. We show then for the first time that these and several other benchmark domains can be solved with no backtracks while performing only polynomial node operations. This is a remarkable finding in our view that suggests that the classes of problems that are solvable without search may be actually much broader than the classes that have been identified so far by work in Tractable Planning. | Tractable planning with state variables by exploiting structural restrictions So far, tractable planning problems reported in the literature have been defined by syntactical restrictions. To better exploit the inherent structure in problems, however, it is probably necessary to study also structural restrictions on the state-transition graph.
Such restrictions are typically computationally hard to test, though, since this graph is of exponential size. We take an intermediate approach, using a state-variable model for planning and restricting the state-transition graph... | Causal graphs and structurally restricted planning The causal graph is a directed graph that describes the variable dependencies present in a planning instance. A number of papers have studied the causal graph in both practical and theoretical settings. In this work, we systematically study the complexity of planning restricted by the causal graph. In particular, any set of causal graphs gives rise to a subcase of the planning problem. We give a complete classification theorem on causal graphs, showing that a set of graphs is either polynomial-time tractable, or is not polynomial-time tractable unless an established complexity-theoretic assumption fails; our theorem describes which graph sets correspond to each of the two cases. We also give a classification theorem for the case of reversible planning, and discuss the general direction of structurally restricted planning. | Complexity results for standard benchmark domains in planning The efficiency of AI planning systems is usually evaluated empirically. For the validity of conclusions drawn from such empirical data, the problem set used for evaluation is of critical importance. In planning, this problem set usually, or at least often, consists of tasks from the various planning domains used in the first two international planning competitions, hosted at the 1998 and 2000 AIPS conferences. It is thus surprising that comparatively little is known about the properties of these benchmark domains, with the exception of BLOCKSWORLD, which has been studied extensively by several research groups. In this contribution, we try to remedy this fact by providing a map of the computational complexity of non-optimal and optimal planning for the set of domains used in the competitions. We identify a common transportation theme shared by the majority of the benchmarks and use this observation to define and analyze a general transportation problem that generalizes planning in several classical domains such as LOGISTICS, MYSTERY and GRIPPER. We then apply the results of that analysis to the actual transportation domains from the competitions. We next examine the remaining benchmarks, which do not exhibit a strong transportation feature, namely SCHEDULE and FREECELL. Relating the results of our analysis to empirical work on the behavior of the recently very successful FF planning system, we observe that our theoretical results coincide well with data obtained from empirical investigations. | Bounded Branching and Modalities in Non-Deterministic Planning. We study the consequences on complexity that arise when bounds on the number of branch points on the solutions for non-deterministic planning problems are imposed as well as when modal formulae are introduced into the description language. New planning tasks, such as whether there exists a plan with at most k branch points for a fully (or partially) observable non-deterministic domain, and whether there exists a no-branch (a.k.a.
conformant) plan for partially observable domains, are introduced and their complexity analyzed. Among other things, we show that deciding the existence of a conformant plan for partially observable domains with modal formulae is 2EXPSPACE-complete, and that the problem of deciding the existence of plans with bounded branching, for fully or partially observable contingent domains, has the same complexity of the conformant task. These results generalize previous results on the complexity of nondeterministic planning and fill a slot that has gone unnoticed in non-deterministic planning, that of conformant planning for partially observable domains. | On the size of reactive plans One of the most widespread approaches to reactive planning is Schoppers' universal plans. We propose a stricter definition of universal plans which guarantees a weak notion of soundness not present in the original definition. Furthermore, we isolate three different types of completeness which capture different behaviours exhibited by universal plans. We show that universal plans which run in polynomial time and are of polynomial size cannot satisfy even the weakest type of completeness unless the polynomial hierarchy collapses. However, by relaxing either the polynomial time or the polynomial space requirement, the construction of universal plans satisfying the strongest type of completeness becomes trivial. | Planning by rewriting: efficiently generating high-quality plans Domain-independent planning is a hard combinatorial problem. Taking into account plan quality makes the task even more difficult. We introduce a new paradigm for efficient high-quality planning that exploits plan rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly sub-optimal, initial plan into a low-cost plan. In addition to addressing the issues of efficiency and quality, this framework yields a new anytime planning algorithm. We have implemented this planner and applied it to several existing domains. The results show that this approach provides significant savings in planning effort while generating high-quality plans. | Constructing conditional plans by a theorem-prover The research on conditional planning rejects the assumptions that there is no uncertainty or incompleteness of knowledge with respect to the state and changes of the system the plans operate on. Without these assumptions the sequences of operations that achieve the goals depend on the initial state and the outcomes of nondeterministic changes in the system. This setting raises the questions of how to represent the plans and how to perform plan search. The answers are quite different from those in the simpler classical framework. In this paper, we approach conditional planning from a new viewpoint that is motivated by the use of satisfiability algorithms in classical planning. Translating conditional planning to formulae in the propositional logic is not feasible because of inherent computational limitations. Instead, we translate conditional planning to quantified Boolean formulae. We discuss three formalizations of conditional planning as quantified Boolean formulae, and present experimental results obtained with a theorem-prover. | Towards a Symmetric Treatment of Satisfaction and Conflicts in Quantified Boolean Formula Evaluation In this paper, we describe a new framework for evaluating Quantified Boolean Formulas (QBF). The new framework is based on the Davis-Putnam (DPLL) search algorithm.
In existing DPLL based QBF algorithms, the problem database is represented in Conjunctive Normal Form (CNF) as a set of clauses, implications are generated from these clauses, and backtracking in the search tree is chronological. In this work, we augment the basic DPLL algorithm with conflict driven learning as well as satisfiability directed implication and learning. In addition to the traditional clause database, we add a cube database to the data structure. We show that cubes can be used to generate satisfiability directed implications similar to conflict directed implications generated by the clauses. We show that in a QBF setting, conflicting leaves and satisfying leaves of the search tree both provide valuable information to the solver in a symmetric way. We have implemented our algorithm in the new QBF solver Quaffle. Experimental results show that for some test cases, satisfiability directed implication and learning significantly prunes the search. | Distributed Storage Codes With Repair-by-Transfer and Nonachievability of Interior Points on the Storage-Bandwidth Tradeoff Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of $k$ nodes within the $n$ -node network. However, regenerating codes possess in addition, the ability to repair a failed node by connecting to an arbitrary subset of $d$ nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when $d=n-1$ . This code has a particularly simple graphical description, and most interestingly has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data as repair by transfer. The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term as “helper node pooling,” and show that it is the necessity to satisfy such scenarios that overconstrains the system. | Concurrent Updates on Striped Data Streams in Clustered Server Systems | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.101141 | 0.102283 | 0.033792 | 0.011459 | 0.009202 | 0.003538 | 0.000619 | 0.000179 | 0.000049 | 0.000001 | 0 | 0 | 0 | 0 |
Efficient cooperative backup with decentralized trust management Existing backup systems are unsatisfactory: commercial backup services are reliable but expensive while peer-to-peer systems are cheap but offer limited assurance of data reliability. This article introduces Friendstore, a system that provides inexpensive and reliable backup by giving users the choice to store backup data only on nodes they trust (typically those owned by friends and colleagues). Because it is built on trusted nodes, Friendstore is not burdened by the complexity required to cope with potentially malicious participants. Friendstore only needs to detect and repair accidental data loss and to ensure balanced storage exchange. The disadvantage of using only trusted nodes is that Friendstore cannot achieve perfect storage utilization. Friendstore is designed for a heterogeneous environment where nodes have very different access link speeds and available disk spaces. To ensure long-term data reliability, a node with limited upload bandwidth refrains from storing more data than its calculated maintainable capacity. A high bandwidth node might be limited by its available disk space. We introduce a simple coding scheme, called XOR(1,2), which doubles a node's ability to store backup information in the same amount of disk space at the cost of doubling the amount of data transferred during restore. Analysis and simulations using long-term node activity traces show that a node can reliably back up tens of gigabytes of data even with low upload bandwidth. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor.
Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Adapting market-oriented scheduling policies for cloud computing Provisioning extra resources is necessary when the local resources are not sufficient to meet the user requirements. Commercial Cloud providers offer the extra resources to users in an on demand manner and in exchange of a fee. Therefore, scheduling policies are required that consider resources' prices as well as user's available budget and deadline. Such scheduling policies are known as market-oriented scheduling policies. However, existing market-oriented scheduling policies cannot be applied for Cloud providers because of the difference in the way Cloud providers charge users. In this work, we propose two market-oriented scheduling policies that aim at satisfying the application deadline by extending the computational capacity of local resources via hiring resource from Cloud providers. The policies do not have any prior knowledge about the application execution time. The proposed policies are implemented in Gridbus broker as a user-level broker. Results of the experiments achieved in real environments prove the usefulness of the proposed policies. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. 
The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
The complexity of deciding reachability properties of distributed negotiation schemes Distributed negotiation schemes offer one approach to agreeing an allocation of resources among a set of individual agents. Such schemes attempt to agree a distribution via a sequence of locally agreed 'deals' (reallocations of resources among the agents), ending when the result satisfies some accepted criteria. Our aim in this article is to demonstrate that some natural decision questions arising in such settings can be computationally significantly harder than questions related to optimal clearing strategies in combinatorial auctions. In particular we prove that the problem of deciding whether it is possible to progress from a given initial allocation to some desired final allocation via a sequence of 'rational' steps is PSPACE-complete. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important.
The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. 
| Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. 
We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, that is required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
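The query abstract in the row above describes extending the Davis-Putnam procedure to evaluate quantified Boolean formulae. As a purely illustrative aside, the sketch below shows the naive recursive evaluation that such procedures start from: split on the outermost prefix variable and combine the two branches with AND for a universal quantifier and OR for an existential one. The literal encoding, the function names and the example formula are assumptions made for this sketch, not taken from any of the cited papers, and none of the pruning techniques discussed above are included.

```python
def assign(clauses, var, value):
    """Simplify a CNF (list of sets of signed ints) under var := value."""
    satisfied = var if value else -var
    result = []
    for clause in clauses:
        if satisfied in clause:
            continue                         # clause is satisfied, drop it
        reduced = {lit for lit in clause if abs(lit) != var}
        if not reduced:
            return None                      # empty clause: the matrix became false
        result.append(reduced)
    return result

def eval_qbf(prefix, clauses):
    """Evaluate a closed prenex QBF; prefix is a list of ('a'|'e', var) pairs."""
    if clauses is None:
        return False
    if not clauses:
        return True
    (quant, var), rest = prefix[0], prefix[1:]
    branches = [eval_qbf(rest, assign(clauses, var, v)) for v in (False, True)]
    return all(branches) if quant == 'a' else any(branches)

# forall x exists y . (x or y) and (not x or not y)  -- true, y can mirror not x
print(eval_qbf([('a', 1), ('e', 2)], [{1, 2}, {-1, -2}]))   # True
```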
DeepMirTar: a deep-learning approach for predicting human miRNA targets. Motivation: MicroRNAs (miRNAs) are small non-coding RNAs that function in RNA silencing and post-transcriptional regulation of gene expression by targeting messenger RNAs (mRNAs). Because the underlying mechanisms associated with miRNA binding to mRNA are not fully understood, a major challenge of miRNA studies involves the identification of miRNA-target sites on mRNA. In silico prediction of miRNA-target sites can expedite costly and time-consuming experimental work by providing the most promising miRNA-target-site candidates. Results: In this study, we reported the design and implementation of DeepMirTar, a deep-learning-based approach for accurately predicting human miRNA targets at the site level. The predicted miRNA-target sites are those having canonical or non-canonical seeds, and features, including high-level expert-designed, low-level expert-designed and raw-data-level features, were used to represent the miRNA-target site. Comparison with other state-of-the-art machine-learning methods and existing miRNA-target-prediction tools indicated that DeepMirTar improved overall predictive performance. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning.
Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. 
The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. 
Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, that is required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
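Several abstracts in the row above revolve around the stable model semantics for logic programs. As an illustrative aside, the sketch below is a brute-force checker for that semantics on small ground normal programs: it builds the Gelfond-Lifschitz reduct of the program with respect to a candidate set of atoms, computes the least model of the resulting negation-free program, and keeps the candidate if the two coincide. The rule representation and the example program are invented for this sketch and are not code from any of the systems mentioned.

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body); atoms are strings.

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct of `program` with respect to a candidate atom set."""
    reduced = []
    for head, pos, neg in program:
        if any(atom in candidate for atom in neg):
            continue                       # a negative condition is falsified: drop the rule
        reduced.append((head, pos))        # keep the rule without its negative body
    return reduced

def least_model(positive_program):
    """Least model of a negation-free program by naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if all(atom in model for atom in pos) and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    """Enumerate all stable models by checking every subset of atoms."""
    subsets = chain.from_iterable(combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
    for candidate in map(set, subsets):
        if least_model(reduct(program, candidate)) == candidate:
            yield candidate

# Example: p :- not q.   q :- not p.   Two stable models: {p} and {q}.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(list(stable_models(prog, {"p", "q"})))
```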
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution. | A new definition of SLDNF-resolution We propose a new, "top-down" definition of SLDNF-resolution which retains the spirit of the original definition but avoids the difficulties noted in the literature. We compare it with the "bottom-up" definition of Kunen [Kun89]. The notion of SLD-resolution of Kowalski [Kow74] allows us to resolve only positive literals. As a result it is not adequate to compute with general programs. Clark [Cla79] proposed to incorporate the negation as finite failure rule. This leads to an... | Proving Termination of General Prolog Programs We study here termination of general logic programs with the Prolog selection rule. To this end we extend the approach of Apt and Pedreschi [AP90] and consider the class of left terminating general programs. These are general logic programs that terminate with the Prolog selection rule for all ground goals. We introduce the notion of an acceptable program and prove that acceptable programs are left terminating. This provides us with a practical method of proving termination. | Expanding queries to incomplete databases by interpolating general logic programs In databases, queries are usually defined on complete databases. In this paper we introduce and motivate the notion of extended queries that are defined on incomplete databases. We argue that the language of extended logic programs is appropriate for representing extended queries. We show through examples that given a query, a particular extension of it has important characteristics which correspond to removal of the CWA from the original specification of the query. We refer to this particular extension as the expansion of the original query. Normally queries are expressed as general logic programs. We develop an algorithm that given a general logic program (satisfying certain syntactic properties) expressing a query constructs an extended logic program that expresses the expanded query. The extended logic program is referred to as the interpolation of the given general logic program. | Efficient top-down computation of queries under the well-founded semantics The well-founded model provides a natural and robust semantics for logic programs with negative literals in rule bodies. Although various procedural semantics have been proposed for query evaluation under the well-founded semantics, the practical issues of implementation for effective and efficient computation of queries have been rarely discussed. | A monotonicity theorem for extended logic programs Because general and extended logic programs behave nonmonotonically, it is in general difficult to predict how even minor changes to such programs will affect their meanings.
This paper shows that for a restricted class of extended logic programs --- those with signings --- it is possible to state a fairly general theorem comparing the entailments of programs. To this end, we generalize (to the class of extended logic programs) the definition of a signing, first formulated by Kunen for general ... | Two components of an action language Some of the recent work on representing action makes use of high-level action languages. In this paper we show that an action language can be represented as the sum of two distinct parts: an "action description language" and an "action query language." A set of propositions in an action description language describes the effects of actions on states. Mathematically, it defines a transition system of the kind familiar from the theory of finite automata. An action query language serves for expressing properties of paths in a given transition system. We define the general concepts of a transition system, of an action description language and of an action query language, give a series of examples of languages of both kinds, and show how to combine a description language and a query language into one. This construction makes it possible to design the two components of an action language independently, which leads to the simplification and clarification of the theory of actions. | Logic programs with classical negation | Defeasible specifications in action theories In order to rank the performance of machine learning algorithms, many researchers conduct experiments on benchmark data sets. Since most learning algorithms have domain-specific parameters, it is a popular custom to adapt these parameters to obtain a ... | Massively Parallel Reasoning about Actions In [2] C. Baral and M. Gelfond present the language AC for representing concurrent actions in dynamic systems, and give a sound but incomplete encoding of this language in terms of extended logic programming. Using their program, the time taken to compute the transition from one situation to another increases quadratically with the size of the considered domain. In this paper, we present a mapping of domain descriptions in AC into neural networks of linear size. These networks take only ... | On the facial structure of set packing polyhedra In this paper we address ourselves to identifying facets of the set packing polyhedron, i.e., of the convex hull of integer solutions to the set covering problem with equality constraints and/or constraints of the form "≤". This is done by using the equivalent node-packing problem derived from the intersection graph associated with the problem under consideration. First, we show that the cliques of the intersection graph provide a first set of facets for the polyhedron in question. Second, it is shown that the cycles without chords of odd length of the intersection graph give rise to a further set of facets. A rather strong geometric property of this set of facets is exhibited. | Heuristics for Scheduling I/O Operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture.
We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits. We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times. | Database-aware semantically-smart storage Years of innovation in file systems have been highly successful in improving their performance and functionality, but at the cost of complicating their interaction with the disk. A variety of techniques exist to ensure consistency and integrity of file ... | Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment based on PCs cannot fulfill the increasing demand. Accelerating the algorithm using FPGAs provides better performance compared to other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes current solutions and provides a reference for further accelerating sequence alignment on an FPGA-based heterogeneous architecture. | 1.030236 | 0.029011 | 0.020876 | 0.015338 | 0.011412 | 0.006 | 0.002581 | 0.000905 | 0.000083 | 0.000012 | 0 | 0 | 0 | 0
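The I/O-scheduling abstract that closes the row above concerns approximate algorithms for completing a batch of parallel I/O operations quickly. As a generic point of reference only, the sketch below shows a longest-processing-time-first list-scheduling heuristic that spreads operations across disks to shorten the batch completion time; it is a stand-in for, not a reproduction of, the algorithms in that paper, and the per-operation costs are invented.

```python
import heapq

def lpt_schedule(op_costs, n_disks):
    """Assign operations (given by their service costs) to disks, longest first,
    always placing the next operation on the currently least-loaded disk.
    Returns (makespan, {disk: [operation indices]})."""
    heap = [(0.0, d) for d in range(n_disks)]            # (current load, disk id)
    heapq.heapify(heap)
    assignment = {d: [] for d in range(n_disks)}
    for op, cost in sorted(enumerate(op_costs), key=lambda item: -item[1]):
        load, disk = heapq.heappop(heap)                  # least-loaded disk so far
        assignment[disk].append(op)
        heapq.heappush(heap, (load + cost, disk))
    makespan = max(load for load, _ in heap)
    return makespan, assignment

costs = [8.0, 4.5, 3.0, 3.0, 2.0, 2.0, 1.5]               # hypothetical per-op service times (ms)
print(lpt_schedule(costs, n_disks=3))
```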
Learning for quantified boolean logic satisfiability Learning, i.e., the ability to record and exploit some information which is unveiled during the search, proved to be a very effective AI technique for problem solving and, in particular, for constraint satisfaction. We introduce learning as a general purpose technique to improve the performances of decision procedures for Quantified Boolean Formulas (QBFs). Since many of the recently proposed decision procedures for QBFs solve the formula using search methods, the addition of learning to such procedures has the potential of reducing useless explorations of the search space. To show the applicability of learning for QBF satisfiability we have implemented it in QUBE, a state-of-the-art QBF solver. While the backjumping engine embedded in QUBE provides a good starting point for our task, the addition of learning required us to devise new data structures and led to the definition and implementation of new pruning strategies. We report some experimental results that witness the effectiveness of learning. Noticeably, QUBE augmented with learning is able to solve instances that were previously out if its reach. To the extent of our knowledge, this is the first time that learning is proposed, implemented and tested for QBFs satisfiability. | Monotone Literals and Learning in QBF Reasoning Monotone literal fixing (MLF) and learning are well-known lookahead and lookback mechanisms in propositional satisfiability (SAT). When considering Quantified Boolean Formulas (QBFs), their separate implementation leads to significant speed-ups in state-of-the-art DPLL-based solvers. This paper is dedicated to the efficient implementation of MLF in a QBF solver with learning. The interaction between MLF and learning is far from being obvious, and it poses some nontrivial questions about both the detection and the propagation of monotone literals. Complications arise from the presence of learned constraints, and are related to the question about whether learned constraints have to be considered or not during the detection and/or propagation of monotone literals. In the paper we answer to this question both from a theoretical and from a practical point of view. We discuss the advantages and the disadvantages of various solutions, and show that our solution of choice, implemented in our solver QUBE, produces significant speed-ups in most cases. Finally, we show that MLF can be fundamental also for solving some SAT instances, taken from the 2002 SAT solvers competition. | QBF Reasoning on Real-World Instances During the recent years, the development of tools for deciding Quantified Boolean Formu- las (QBFs) has been accompanied by a steady supply of real-world instances, i.e., QBFs originated by translations from application domains. Instances of this kind showed to be challenging for current state-of-the-art QBF solvers, while the ability to deal effectively with them is necessary to foster adop- tion of QBF-based reasoning in practice. In this paper we describe three reasoning techniques that we implemented in our solver QUBE++ to increase its performances on real-world instances coming from formal verification and planning domains. We present experimental results that witness the contribu- tion of each technique and the better performances of QUBE++ with respect to other state-of-the-art QBF solvers. 
The effectiveness of QUBE++ is further confirmed by experiments run on challenging real-world SAT instances, where QUBE++ turns out to be competitive with respect to current state-of- the-art SAT solvers. | Solving quantified boolean formulas with circuit observability don't cares Traditionally the propositional part of a Quantified Boolean Formula (QBF) instance has been represented using a conjunctive normal form (CNF). As with propositional satisfiability (SAT), this is motivated by the efficiency of this data structure. However, in many cases, part of or the entire propositional part of a QBF instance can often be represented as a combinational logic circuit. In a logic circuit, the limited observability of the internal signals at the circuit outputs may make their assignments irrelevant for specific assignments of values to other signals in the circuit. This circuit observability don't care (ODC) information has been used to advantage in circuit based SAT solvers. A CNF encoding of the circuit, however, does not capture the signal direction and this limited observability, and thus cannot directly take advantage of this. However, recently it has been shown that this don't care information can be encoded in the CNF description and taken advantage of in a DPLL based SAT solver by modifying the decision heuristics/Boolean constraint propagation/conflict-driven-learning to account for these don't cares. Thus far, however, the use of these don't cares in the CNF encoding has not been explored for QBF solvers. In this paper, we examine how this can be done for QBF solvers as well as evaluate its practical benefits through experimentation. We have developed and implemented the usage of circuit ODCs in various parts of the DPLL-based procedure of the Quaffle QBF solver. We show that DPLL search based QBF solvers can use circuit ODC information to detect satisfying branches earlier during search and make satisfiability directed learning more effective. Our experiments demonstrate that significant performance gain can be obtained by considering circuit ODCs in checking the satisfiability of QBFs. | An Effective Algorithm for the Futile Questioning Problem In the futile questioning problem, one must decide whether acquisition of additional information can possibly lead to the proof of a conclusion. Solution of that problem demands evaluation of a quantified Boolean formula at the second level of the polynomial hierarchy. The same evaluation problem, called Q-ALL SAT, arises in many other applications. In this paper, we introduce a special subclass of Q-ALL SAT that is at the first level of the polynomial hierarchy. We develop a solution algorithm for the general case that uses a backtracking search and a new form of learning of clauses. Results are reported for two sets of instances involving a robot route problem and a game problem. For these instances, the algorithm is substantially faster than state-of-the-art solvers for quantified Boolean formulas. | SAT-based planning in complex domains: concurrency, constraints and nondeterminism Planning as satisfiability is a very efficient technique for classical planning, i.e., for planning domains in which both the effects of actions and the initial state are completely specified. In this paper we present C-SAT, a SAT-based procedure capable of dealing with planning domains having incomplete information about the initial state, and whose underlying transition system is specified using the highly expressive action language C. 
Thus, C-SAT allows for planning in domains involving (i) actions which can be executed concurrently; (ii) (ramification and qualification) constraints affecting the effects of actions; and (iii) nondeterminism in the initial state and in the effects of actions. We first prove the correctness and the completeness of C-SAT, discuss some optimizations, and then we present C-PLAN, a system based on C-SAT. C-PLAN works on any C planning problem, but some optimizations have not been fully implemented yet. Nevertheless, the experimental analysis shows that SAT-based approaches to planning with incomplete information are viable, at least in the case of problems with a high degree of parallelism. | Bounded universal expansion for preprocessing QBF We present a new approach for preprocessing Quantified Boolean Formulas (QBF) in conjunctive normal form (CNF) by expanding a selection of universally quantified variables with bounded expansion costs. We describe a suitable selection strategy which exploits locality of universals and combines cost estimates with goal orientation by taking into account unit literals which might be obtained. Furthermore, we investigate how Q-resolution can be integrated into this method. In particular, resolution is applied specifically to reduce the amount of copying necessary for universal expansion. Experimental results demonstrate that our preprocessing can successfully improve the performance of state-of-the-art QBF solvers on wellknown problems from the QBFLIB collection. | Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning In Verification and in (optimal) AI Planning, a successful method is to formulate the application as boolean satisfiability ( SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations ( and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior - the proof arguments - illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks. 
| Approximate reasoning about actions in the presence of sensing and incomplete information Sensing actions are important for planning with incomplete information. A solution for the frame problem for sensing actions was proposed by Scherl and Levesque. They adapt the possible world model of knowledge to the situation calculus. In this paper we propose a high level language in the spirit of the language A, that allows sensing actions. We then present two approximation semantics of this language and their translation to logic programs. Unlike A, where states are two valued... | The frame problem in the situation calculus: a simple solution (sometimes) and a completeness result for goal regression | Dynamic Multi-Resource Load Balancing in Parallel Database Systems | Equivalence notions and model minimization in Markov decision processes Many stochastic planning problems can be represented using Markov Decision Processes (MDPs). A difficulty with using these MDP representations is that the common algorithms for solving them run in time polynomial in the size of the state space, where this size is extremely large for most real-world planning problems of interest. Recent AI research has addressed this problem by representing the MDP in a factored form. Factored MDPs, however, are not amenable to traditional solution methods that call for an explicit enumeration of the state space. One familiar way to solve MDP problems with very large state spaces is to form a reduced (or aggregated) MDP with the same properties as the original MDP by combining "equivalent" states. In this paper, we discuss applying this approach to solving factored MDP problems--we avoid enumerating the state space by describing large blocks of "equivalent" states in factored form, with the block descriptions being inferred directly from the original factored representation. The resulting reduced MDP may have exponentially fewer states than the original factored MDP, and can then be solved using traditional methods. The reduced MDP found depends on the notion of equivalence between states used in the aggregation. The notion of equivalence chosen will be fundamental in designing and analyzing algorithms for reducing MDPs. Optimally, these algorithms will be able to find the smallest possible reduced MDP for any given input MDP and notion of equivalence (i.e., find the "minimal model" for the input MDP). Unfortunately, the classic notion of state equivalence from non-deterministic finite state machines generalized to MDPs does not prove useful. We present here a notion of equivalence that is based upon the notion of bisimulation from the literature on concurrent processes. Our generalization of bisimulation to stochastic processes yields a non-trivial notion of state equivalence that guarantees the optimal policy for the reduced model immediately induces a corresponding optimal policy for the original model. With this notion of state equivalence, we design and analyze an algorithm that minimizes arbitrary factored MDPs and compare this method analytically to previous algorithms for solving factored MDPs. We show that previous approaches implicitly derive equivalence relations that we define here. | An efficient scheme for providing high availability Replication at the partition level is a promising approach for increasing availability in a Shared Nothing architecture. We propose an algorithm for maintaining replicas with little overhead during normal failure-free processing.
Our mechanism updates the secondary replica in an asynchronous manner: entire dirty pages are sent to the secondary at some time before they are discarded from the primary's buffer. A log server node (hardened against failures) maintains the log for each node. If a primary node fails, the secondary fetches the log from the log server, applies it to its replica, and brings itself to the primary's last transaction-consistent state. We study the performance of various policies for sending pages to the secondary and the corresponding trade-offs between recovery time and overhead during failure-free processing. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.010693 | 0.013771 | 0.012129 | 0.009461 | 0.006821 | 0.004334 | 0.002155 | 0.000258 | 0.000024 | 0.000002 | 0 | 0 | 0 | 0
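Most of the solvers discussed in the row above (QUBE, Quaffle, SAT-based planners) are built on DPLL-style backtracking search, extended with learning, backjumping and other refinements. As a point of reference only, here is a bare-bones DPLL satisfiability procedure with unit propagation and naive branching; the clause encoding and the example formula are assumptions for the sketch, and none of the learning machinery described above is included.

```python
def unit_propagate(clauses, assignment):
    """Extend a set of true literals with all forced (unit) literals; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            live = [lit for lit in clause if -lit not in assignment]
            if not live:
                return None                   # every literal is false: conflict
            if len(live) == 1:
                assignment.add(live[0])       # unit clause forces this literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if the CNF is unsatisfiable."""
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None
    unassigned = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
    if not unassigned:
        return assignment
    var = min(unassigned)                     # naive branching: lowest-numbered variable first
    for literal in (var, -var):
        model = dpll(clauses, assignment | {literal})
        if model is not None:
            return model
    return None

cnf = [(1, 2), (-1, 3), (-2, -3), (-1, -2)]   # a small satisfiable example
print(dpll(cnf))
```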
Structural Deep Network Embedding Network embedding is an important method to learn low-dimensional representations of vertexes in networks, aiming to capture and preserve the network structure. Almost all the existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. Therefore, how to find a method that is able to effectively capture the highly non-linear network structure and preserve the global and local structure is an open yet important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model, which has multiple layers of non-linear functions, thereby being able to capture the highly non-linear network structure. Then we propose to exploit the first-order and second-order proximity jointly to preserve the network structure. The second-order proximity is used by the unsupervised component to capture the global network structure. While the first-order proximity is used as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method can preserve both the local and global network structure and is robust to sparse networks. Empirically, we conduct the experiments on five real-world networks, including a language network, a citation network and three social networks. The results show that compared to the baselines, our method can reconstruct the original network significantly better and achieves substantial gains in three applications, i.e. multi-label classification, link prediction and visualization. | Task-Guided and Path-Augmented Heterogeneous Network Embedding for Author Identification. In this paper, we study the problem of author identification under double-blind review setting, which is to identify potential authors given information of an anonymized paper. Different from existing approaches that rely heavily on feature engineering, we propose to use network embedding approach to address the problem, which can automatically represent nodes into lower dimensional feature vectors. However, there are two major limitations in recent studies on network embedding: (1) they are usually general-purpose embedding methods, which are independent of the specific tasks; and (2) most of these approaches can only deal with homogeneous networks, where the heterogeneity of the network is ignored. Hence, challenges faced here are two folds: (1) how to embed the network under the guidance of the author identification task, and (2) how to select the best type of information due to the heterogeneity of the network. To address the challenges, we propose a task-guided and path-augmented heterogeneous network embedding model. In our model, nodes are first embedded as vectors in latent feature space. Embeddings are then shared and jointly trained according to task-specific and network-general objectives. We extend the existing unsupervised network embedding to incorporate meta paths in heterogeneous networks, and select paths according to the specific task. The guidance from author identification task for network embedding is provided both explicitly in joint training and implicitly during meta path selection. 
Our experiments demonstrate that by using path-augmented network embedding with task guidance, our model can obtain significantly better accuracy at identifying the true authors compared to existing methods. | A practical tutorial on autoencoders for nonlinear feature fusion: Taxonomy, models, software and guidelines. • Autoencoders are a growing family of tools for nonlinear feature fusion. • A taxonomy of these methods is proposed, detailing each one of them. • Comparisons to other feature fusion techniques and applications are studied. • Guidelines on autoencoder design and example results are provided. • Available software for building autoencoders is summarized. | Deep learning via semi-supervised embedding We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Empirical Analysis of Predictive Algorithms for Collaborative Filtering Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation... | Predicting individual disease risk based on medical history The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future disease risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.
| Real-time multimedia systems The expansion of multimedia networks and systems depends on real-time support for media streams and interactive multimedia services. Multimedia data are essentially continuous, heterogeneous, and isochronous, three characteristics with strong real-time implications when combined. At the same time, some multimedia services, like video-on-demand or distributed simulation, are real-time applications with sophisticated temporal functionalities in their user interface. We analyze the main problems in building such real-time multimedia systems, and we discuss-under an architectural prospect-some technological solutions especially those regarding determinism and efficient synchronization in the storage, processing, and communication of audio and video data | NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions. | Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[logkn]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all k. | Normal forms for answer sets programming Normal forms for logic programs under stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The body of rules is composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. 
Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle, or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e. the syntactic characterization of existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called Cycle Graph which presented in the companion article Costantini (2004b). | A cost-benefit scheme for high performance predictive prefetching | Representing the Process Semantics in the Event Calculus In this paper we shall present a translation of the process semantics [5] to the event calculus. The aim is to realize a method of integrating high-level semantics with logical calculi to reason about continuous change. The general translation rules and the soundness and completeness theorem of the event calculus with respect to the process semantics are main technical results of this paper. | Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification. | 1.04 | 0.04 | 0.02 | 0.000769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
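The row above is about learning low-dimensional vertex representations that preserve network proximities. As a deliberately shallow point of comparison, the sketch below embeds each vertex by a truncated SVD of its row of the adjacency matrix, a linear stand-in for the neighbourhood (second-order-proximity) signal that SDNE models with a deep autoencoder; the toy graph and the embedding dimensionality are invented for the example.

```python
import numpy as np

def svd_embedding(adjacency, dim):
    """Return one `dim`-dimensional vector per vertex from the adjacency matrix."""
    adjacency = np.asarray(adjacency, dtype=float)
    u, s, _ = np.linalg.svd(adjacency, full_matrices=False)
    return u[:, :dim] * s[:dim]               # scale the leading components by their singular values

# A tiny graph: two triangles joined by a single edge.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
emb = svd_embedding(A, dim=2)
print(np.round(emb, 3))                        # vertices in the same triangle land close together
```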
A trace-driven comparison of algorithms for parallel prefetching and caching No abstract available. | I/O-Conscious Volume Rendering Most existing volume rendering algorithms assume that data sets are memory-resident and thus ignore the performance overhead of disk I/O. While this assumption may be true for high-performance graphics machines, it does not hold for most desktop personal workstations. To minimize the end-to-end volume rendering time, this work re-examines implementation strategies of the ray casting algorithm, taking into account both computation and I/O overheads. Specifically, we developed a data-driven execution model for ray casting that achieves the maximum overlap between rendering computation and disk I/O. Together with other performance optimizations, on a 300-MHz Pentium-II machine, without directional shading, our implementation is able to render a 128x128 greyscale image from a 128x128x128 data set with an average end-to-end delay of 1 second, which is very close to the memory-resident rendering time. With a little modification, this work can also be extended to do out-of-core visualization as well. | Intensive Data Management in Parallel Systems: A Survey In this paper we identify and discuss issues that arerelevant to the design and usage of databases handling massiveamounts of data in parallel environments. The issues that are tackledinclude the placement of the data in the memory, file systems,concurrent access to data, effects on query processing, and theimplications of specific machine architectures. Since not allparameters are tractable in rigorous analysis, results of performanceand bench-marking studies are highlighted for several systems. | Minimizing Stall Time in Single and Parallel Disk Systems Using Multicommodity Network Flows We study integrated prefetching and caching in single and parallel disk systems. Arecen t approach used linear programming to solve the problem. We show that integrated prefetching and caching can also be formulated as a min-cost multicommodity flow problem and, exploiting special properties of our network, can be solved using combinatorial techniques. Moreover, for parallel disk systems, we develop improved approximation algorithms, trading performance guarantee for running time. If the number of disks is constant, we achieve a 2-approximation. | Matrix-Stripe-Cache-Based Contiguity Transform for Fragmented Writes in RAID-5 Given that contiguous reads and writes between a cache and a disk outperform fragmented reads and writes, fragmented reads and writes are forcefully transformed into contiguous reads and writes via a proposed matrix-stripe-cache-based contiguity transform (MSC-CT) method which employs a rule of consistency for data integrity at the block level and a rule of performance that ensures no performance degradation. MSC-CT performs for reads and writes, both of which are produced by write requests from a host as a write request from a host employs reads for parity update and writes to disks in a redundant array of independent disks (RAID)-5. MSC-CT is compatible with existing disk technologies. The proposed implementation in a Linux kernel delivers a peak throughput that is 3.2 times higher than a case without MSC-CT on representative workloads. The results demonstrate that MSC-CT is extremely simple to implement, has low overhead, and is ideally suited for RAID controllers not only for random writes but also for sequential writes in various realistic scenarios. 
| Using MEMS-based storage in disk arrays Current disk arrays, the basic building blocks of high-performance storage systems, are built around two memory technologies: magnetic disk drives, and non-volatile DRAM caches. Disk latencies are higher by six orders of magnitude than non-volatile DRAM access times, but cache costs over 1000 times more per byte. A new storage technology based on microelectromechanical systems (MEMS) will soon offer a new set of performance and cost characteristics that bridge the gap between disk drives and the caches. We evaluate potential gains in performance and cost by incorporating MEMS-based storage in disk arrays. Our evaluation is based on exploring potential placements of MEMS-based storage in a disk array. We used detailed disk array simulators to replay I/O traces of real applications for the evaluation. We show that replacing disks with MEMS-based storage can improve the array performance dramatically, with a cost performance ratio several times better than conventional arrays even if MEMS storage costs ten times as much as disk. We also demonstrate that hybrid MEMS/disk arrays, which cost less than purely MEMS-based arrays, can provide substantial improvements in performance and cost/performance over conventional arrays. | Managing prefetch memory for data-intensive online servers Years of innovation in file systems have been highly successful in improving their performance and functionality, but at the cost of complicating their interaction with the disk. A variety of techniques exist to ensure consistency and integrity of file ... | Informed prefetching of collective input/output requests | Measurements of a distributed file system We analyzed the user-level file access patterns and caching behavior of the Sprite distributed file system. The first part of our analysis repeated a study done in 1985 of the: BSD UNIX file system. We found that file throughput has increased by a factor of 20 to an average of 8 Kbytes per second per active user over 10-minute intervals, and that the use of process migration for load sharing increased burst rates by another factor of six. Also, many more very large (multi-megabyte) files are in use today than in 1985. The second part of our analysis measured the behavior of Sprite's main-memory file caches. Client-level caches average about 7 Mbytes in size (about one-quarter to one-third of main memory) and filter out about 50% of the traffic between clients and servers. 35% of the remaining server traffic is caused by paging, even on workstations with large memories. We found that client cache consistency is needed to prevent stale data errors, but that it is not invoked often enough to degrade overall system performance. | Caching Hints in Distributed Systems Caching reduces the average cost of retrieving data by amortizing the lookup cost over several references to the data. Problems with maintaining strong cache consistency in a distributed system can be avoided by treating cached information as hints. A new approach to managing caches of hints suggests maintaining a minimum level of cache accuracy, rather than maximizing the cache hit ratio, in order to guarantee performance improvements. The desired accuracy is based on the ratio of lookup costs to the costs of detecting and recovering from invalid cache entries. Cache entries are aged so that they get purged when their estimated accuracy falls below the desired level. 
The age thresholds are dictated solely by clients' accuracy requirements instead of being suggested by data storage servers or system administrators. | Automatic recovery from disk failure in continuous-media servers Continuous-media (CM) servers have been around for some years. Apart from server capacity, another important issue in the deployment of CM servers is reliability. This study investigates rebuild algorithms for automatically rebuilding data stored in a failed disk into a spare disk. Specifically, a block-based rebuild algorithm is studied with the rebuild time and buffer requirement modeled. A buffer-sharing scheme is then proposed to eliminate the additional buffers needed by the rebuild process. To further improve rebuild performance, a track-based rebuild algorithm that rebuilds lost data in tracks is proposed and analyzed. Results show that track-based rebuild, while it substantially outperforms block-based rebuild, requires significantly more buffers (17-135 percent more) even with buffer sharing. To tackle this problem, a novel pipelined rebuild algorithm is proposed to take advantage of the sequential property of track retrievals to pipeline the reading and writing processes. This pipelined rebuild algorithm achieves the same rebuild performance as track-based rebuild, but reduces the extra buffer requirement to insignificant levels (0.7-1.9 percent). Numerical results computed using models of five commercial disk drives demonstrate that automatic rebuild of a failed disk can be done in a reasonable amount of time, even at relatively high server utilization (e.g., less than 1.5 hours at 90 percent utilization). | Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete Sets It is known that for any class $\mathcal{C}$ closed under union and intersection, the Boolean closure of $\mathcal{C}$, the Boolean hierarchy over $\mathcal{C}$, and the symmetric difference hierarchy over $\mathcal{C}$ all are equal. We prove that these equalities hold for any complexity class closed under intersection; in particular, they thus hold for unambiguous polynomial time (UP). In contrast to the NP case, we prove that the Hausdorff hierarchy and the nested difference hierarchy over UP both fail to capture the Boolean closure of UP in some relativized worlds. Karp and Lipton proved that if nondeterministic polynomial time has sparse Turing-complete sets, then the polynomial hierarchy collapses. We establish the first consequences from the assumption that unambiguous polynomial time has sparse Turing-complete sets: (a) $\mathrm{UP} \subseteq \mathrm{Low}_2$, where $\mathrm{Low}_2$ is the second level of the low hierarchy, and (b) each level of the unambiguous polynomial hierarchy is contained one level lower in the promise unambiguous polynomial hierarchy than is otherwise known to be the case. | Learning to Reason About Actions We focus on learning representations of dynamical systems that can be characterized by logic-based formalisms for reasoning about actions and change, where system's behaviors are naturally viewed as appropriate logical consequences of the domain's description. To this end, logic-based induction methods are adapted to identify the input/output behavior of a dynamical system corresponding to an environment. The study of dynamic domains is started with domains modelable with classical action theories and is progressively enhanced to manage more complex behaviors.
| Learning Topic Representation For SMT With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.004233 | 0.00639 | 0.00597 | 0.004238 | 0.003484 | 0.002649 | 0.001858 | 0.001163 | 0.000467 | 0.000063 | 0.000002 | 0 | 0 | 0
Pruning recurrent neural networks for improved generalization performance. Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state, triple-parity grammar. Further simulations indicate that this pruning method can have generalization performance superior to that obtained by training with weight decay. | Learning a class of large finite state machines with a recurrent neural network One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machine (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing in the sense that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show that a type of recurrent neural network (Narendra & Parthasarathy, 1990, IEEE Trans. Neural Networks, 1 , 4–27) which has feedback but no hidden state neurons can learn a special type of FSM called a finite memory machine (FMM) under certain constraints. These machines have a large number of states (simulations are for 256 and 512 state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine. | Deep learning of the tissue-regulated splicing code. Motivation: Alternative splicing (AS) is a regulated process that directs the generation of different transcripts from single genes. A computational model that can accurately predict splicing patterns based on genomic features and cellular context is highly desirable, both in understanding this widespread phenomenon, and in exploring the effects of genetic variations on AS. Methods: Using a deep neural network, we developed a model inferred from mouse RNA-Seq data that can predict splicing patterns in individual tissues and differences in splicing patterns across tissues. Our architecture uses hidden variables that jointly represent features in genomic sequences and tissue types when making predictions. A graphics processing unit was used to greatly reduce the training time of our models with millions of parameters. Results: We show that the deep architecture surpasses the performance of the previous Bayesian method for predicting AS patterns. With the proper optimization procedure and selection of hyperparameters, we demonstrate that deep architectures can be beneficial, even with a moderately sparse dataset. An analysis of what the model has learned in terms of the genomic features is presented. 
| Generalization by weight-elimination with application to forecasting Inspired by the information theoretic idea of minimum description length, we add a term to the back propagation cost function that penalizes network complexity. We give the details of the procedure, called weight-elimination, describe its dynamics, and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about prior distribution of the weights. We use this procedure to predict the sunspot time series and the notoriously noisy series of currency exchange rates. | A survey of kernels for structured data Kernel methods in general and support vector machines in particular have been successful in various learning tasks on data represented in a single table. Much 'real-world' data, however, is structured - it has no natural representation in a single table. Usually, to apply kernel methods to 'real-world' data, extensive pre-processing is performed to embed the data into a real vector space and thus in a single table. This survey describes several approaches of defining positive definite kernels on structured instances directly. | On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes We compare discriminative and generative learning as typified by logistic regression and naive Bayes. We show, contrary to a widely-held belief that discriminative classifiers are almost always to be preferred, that there can often be two distinct regimes of performance as the training set size is increased, one in which each algorithm does better. This stems from the observation-which is borne out in repeated experiments-that while discriminative learning has lower asymptotic error, a generative classifier may also approach its (higher) asymptotic error much faster. | Estimation of Non-Normalized Statistical Models by Score Matching One often wants to estimate statistical models where the probability density function is known only up to a multiplicative normalization constant. Typically, one then has to resort to Markov Chain Monte Carlo methods, or approximations of the normalization constant. Here, we propose that such models can be estimated by minimizing the expected squared distance between the gradient of the log-density given by the model and the gradient of the log-density of the observed data. While the estimation of the gradient of log-density function is, in principle, a very difficult non-parametric problem, we prove a surprising result that gives a simple formula for this objective function. The density function of the observed data does not appear in this formula, which simplifies to a sample average of a sum of some derivatives of the log-density given by the model. The validity of the method is demonstrated on multivariate Gaussian and independent component analysis models, and by estimating an overcomplete filter set for natural image data. Keywords: statistical estimation, non-normalized densities, pseudo-likelihood, Markov chain Monte Carlo, contrastive divergence | A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels.
Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and two-dimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of high-dimensional image data and show that these codes are substantially more robust compared against other image codes such as ICA and wavelets. | A Hierarchical Model Of Shape And Appearance For Human Action Classification We present a novel model for human action categorization. A video sequence is represented as a collection of spatial and spatial-temporal features by extracting static and dynamic interest points. We propose a hierarchical model that can be characterized as a constellation of bags-of-features and that is able to combine both spatial and spatial-temporal features. Given a novel video sequence, the model is able to categorize human actions in a frame-by-frame basis. We test the model on a publicly available human action dataset (2) and show that our new method performs well on the classification task. We also conducted control experiments to show that the use of the proposed mixture of hierarchical models improves the classification performance over bag of feature models. An additional experiment shows that using both dynamic and static features provides a richer representation of human actions when compared to the use of a single feature type, as demonstrated by our evaluation in the classification task. | Provably Difficult Combinatorial Games | Expressivity of STRIPS-Like and HTN-Like Planning It is widely believed, that the expressivity of STRIPS and STRIPS-like planning based on actions is generally lower than the expressivity of Hierarchical Task Network (HTN) and HTN-like planning, based on hierarchical decomposition. This would mean that a HTN-like planner can generally solve more domains than a STRIPS-like planner with the same extensions. In this paper, we show that both approaches, as they are practically used, are identically expressive and can solve all domains solvable by a Turing machine with finite tape (i.e. solvable by a common computer). | WSCLOCK—a simple and effective algorithm for virtual memory management A new virtual memory management algorithm WSCLOCK has been synthesized from the local working set (WS) algorithm, the global CLOCK algorithm, and a new load control mechanism for auxiliary memory access. The new algorithm combines the most useful feature of WS—a natural and effective load control that prevents thrashing—with the simplicity and efficiency of CLOCK. Studies are presented to show that the performance of WS and WSCLOCK are equivalent, even if the savings in overhead are ignored. | WOW: wise ordering for writes - combining spatial and temporal locality in non-volatile caches Write caches using fast, non-volatile storage are now widely used in modern storage controllers since they enable hiding latency on writes.
Effective algorithms for write cache management are extremely important since (i) in RAID-5, due to read-modify-write and parity updates, each write may cause up to four separate disk seeks while a read miss causes only a single disk seek; and (ii) typically, write cache size is much smaller than the read cache size - a proportion of 1 : 16 is typical. A write caching policy must decide: what data to destage. On one hand, to exploit temporal locality, we would like to destage data that is least likely to be re-written soon with the goal of minimizing the total number of destages. This is normally achieved using a caching algorithm such as LRW (least recently written). However, a read cache has a very small uniform cost of replacing any data in the cache, whereas the cost of destaging depends on the state of the disk heads. Hence, on the other hand, to exploit spatial locality, we would like to destage writes so as to minimize the average cost of each destage. This can be achieved by using a disk scheduling algorithm such as CSCAN, that destages data in the ascending order of the logical addresses, at the higher level of the write cache in a storage controller. Observe that LRW and CSCAN focus, respectively, on exploiting either temporal or spatial locality, but not both simultaneously. We propose a new algorithm, namely, Wise Ordering for Writes (WOW), for write cache management that effectively combines and balances temporal and spatial locality. Our experimental set-up consisted of an IBM xSeries 345 dual processor server running Linux that is driving a (software) RAID-5 or RAID-10 array using a workload akin to Storage Performance Council's widely adopted SPC-1 benchmark. In a cache-sensitive configuration on RAID-5, WOW delivers peak throughput that is 129% higher than CSCAN and 9% higher than LRW. In a cache-insensitive configuration on RAID-5, WOW and CSCAN deliver peak throughput that is 50% higher than LRW. For a random write workload with nearly 100% misses, on RAID-10, with a cache size of 64K, 4KB pages (256MB), WOW and CSCAN deliver peak throughput that is 200% higher than LRW. In summary, WOW has better or comparable peak throughput to the best of CSCAN and LRW across a wide gamut of write cache sizes and workload configurations. In addition, even at lower throughputs, WOW has lower average response times than CSCAN and LRW. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.112 | 0.033333 | 0.004095 | 0.003031 | 0.000554 | 0.000214 | 0.000094 | 0.000039 | 0.000011 | 0 | 0 | 0 | 0 | 0 |
Condition-based maintenance of naval propulsion systems: Data analysis with minimal feedback. • Data-Driven models to investigate CBM on a ship propulsion system. • State-of-the-art supervised and unsupervised learning techniques adopted. • Unsupervised learning algorithms for anomaly detection. • CBM approach in an unsupervised fashion adopting minimal feedback. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly.
For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. 
Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. 
| Learning Topic Representation For SMT With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
An Adaptive Update-Based Cache Coherence Protocol for Reduction of Miss Rate and Traffic Although directory-based write-invalidate cache coherence protocols have a potential to improve the performance of large-scale multiprocessors, coherence misses limit the processor utilization. Therefore, so-called competitive-update protocols — hybrid protocols between write-invalidate and write-update — have been considered as a means to reduce the coherence miss rate and have been shown to be a better coherence policy for a wide range of applications. Unfortunately such protocols may cause high traffic peaks for applications with extensive use of migratory objects. These traffic peaks can offset the performance gain of a reduced miss rate if the network bandwidth is not sufficient. We propose in this study to extend a competitive-update protocol with a previously published adaptive mechanism that can dynamically detect migratory objects and reduce the coherence traffic they cause. Detailed architectural simulations based on five scientific and engineering applications show that this adaptive protocol can outperform a write-invalidate protocol by reducing the miss rate and bandwidth need by as much as 71% and 26%, respectively. | Adaptive cache coherence over a high bandwidth broadband mesh network Networks have traditionally been an obstacle to high performance distributed computing. Specific problems are insufficient bandwidth and long transaction latencies. While pipelining data can achieve high bandwidth, it does nothing for latency which is still a bottleneck in performance. One approach is to develop a cache coherence protocol which exploits recurring data sharing patterns to reduce the impact of latency. This paper proposes an adaptive cache coherence protocol which detects producer–consumer type sharing and maintains coherence on only those cache blocks which exhibit producer–consumer sharing via updates rather than invalidates. Execution driven simulations of this protocol show improved performance compared to a standard write-invalidate protocol and a competitive update protocol. When there are no access patterns to exploit, the protocol does not degrade performance. When there is producer–consumer type sharing, the proposed protocol runs benchmarks up to 30% faster than the better of either write-invalidate or competitive update. As a side-effect, it shows improved tolerance of increasing network latency. | Combining compile-time and run-time support for efficient software distributed shared memory We describe an integrated compile time and run time system for efficient shared memory parallel computing on distributed memory machines. The combined system presents the user with a shared memory programming model. The run time system implements a consistent shared memory abstraction using memory access detection and automatic data caching. The compiler improves the efficiency of the shared memor... | Effectiveness of dynamic prefetching in multiple-writer distributed virtual shared-memory systems We consider a network of workstations (NOW) organization consisting of bus-based multiprocessors interconnected by an ATM interconnect on which a shared-memory programming model is imposed by using a multiple-writer distributed virtual shared memory system. The latencies associated with bringing data into the local memory are a severe performance limitation of such systems.
To tolerate the access latencies, we propose a novel prefetch approach and show how it can be integrated into the software-based coherence layer of a multiple-writer protocol. This approach uses the access history of each page to guide which pages to prefetch. Based on detailed architectural simulations and seven scientific applications we find that our prefetch algorithm can remove a vast majority of the remote operations which improves the performance of all applications. We also find that the bandwidth provided by ATM switches available today is sufficient to accommodate prefetching. However, the protocol processing overhead of available ATM interfaces limits the gain of the prefetching algorithms. | Adaptive protocols for software distributed shared memory We demonstrate the benefits of software shared memory protocols that adapt at run time to the memory access patterns observed in the applications. This adaptation is automatic-no user annotations are required-and does not rely on compiler support or special hardware. We investigate adaptation between single- and multiple-writer protocols, dynamic aggregation of pages into a larger transfer unit, and adaptation between invalidate and update. Our results indicate that adaptation between single- and multiple-writer and dynamic page aggregation are clearly beneficial. The results for the adaptation between invalidate and update are less compelling, showing at best gains similar to the dynamic aggregation adaptation and at worst serious performance deterioration. | A Comparison of Two Strategies of Dynamic Data Prefetching in Software DSM A major overhead of software DSM is the long remote access latency when the accessed page is not in the local cache. One method for tolerating the remote access latency is to prefetch the pages before they are accessed. This paper compares two methods of dynamic data prefetching-history prefetching, which utilizes the temporal locality of the program to prefetch, and aggregate prefetching, which utilizes the spatial locality of the program to prefetch-on the JIAJIA software DSM. Experiments with eight well-accepted benchmarks and a real application show that both can dramatically reduce the number of remote page faults and the number of messages exchanged. All applications benefit from the prefetching in overall running time, and four achieve a performance improvement of 10%-20%. We then analyze the advantages and disadvantages of the two prefetching strategies. We find that aggregate prefetching may be more efficient than history prefetching for most applications in software DSM systems. | Regeneration of replicated objects: a technique and its Eden implementation A replicated directory system based on a method called regeneration is designed and implemented. The directory system allows selection of arbitrary object to be replicated, choice of the number of replicas for each object, and placement of the copies on machines with independent failure modes. Copies can become inaccessible due to node crashes, but as long as a single copy survives, the replication level is restored by automatically replacing lost copies on other active machines. The focus is on a regeneration algorithm for replica replacement and its application to a replicated directory structure in the Eden local area network. A simple probabilistic approach is used to compare the availability provided by the algorithm to three other replication techniques.
| Clotho: decoupling memory page layout from storage organization As database application performance depends on the utilization of the memory hierarchy, smart data placement plays a central role in increasing locality and in improving memory utilization. Existing techniques, however, do not optimize accesses to all levels of the memory hierarchy and for all the different workloads, because each storage level uses different technology (cache, memory, disks) and each application accesses data using different patterns. Clotho is a new buffer pool and storage management architecture that decouples in-memory page layout from data organization on non-volatile storage devices to enable independent data layout design at each level of the storage hierarchy. Clotho can maximize cache and memory utilization by (a) transparently using appropriate data layouts in memory and non-volatile storage, and (b) dynamically synthesizing data pages to follow application access patterns at each level as needed. Clotho creates in-memory pages individually tailored for compound and dynamically changing workloads, and enables efficient use of different storage technologies (e.g., disk arrays or MEMS-based storage devices). This paper describes the Clotho design and prototype implementation and evaluates its performance under a variety of workloads using both disk arrays and simulated MEMS-based storage devices. | Reordering Query Execution in Tertiary Memory Databases In the relational model the order of fetching data does not affect query correctness. This flexibility is exploited in query optimization by statically reordering data accesses. However, once a query is optimized, it is executed in a fixed order in most systems, with the result that data requests are made in a fixed order. Only limited forms of runtime reordering can be provided by low-level device managers. More aggressive reordering strategies are essential in scenarios where the latency of access to data objects varies widely and dynamically, as in tertiary devices. This paper presents such a strategy. Our key innovation is to exploit dynamic reordering to match execution order to the optimal data fetch order, in all parts of the plan-tree. To demonstrate the practicality of our approach and the impact of our optimizations, we report on a prototype implementation based on Postgres. Using our system, typical I/O cost for queries on tertiary memory databases is as much as an order of magnitude smaller than with conventional query processing techniques. | DULO: an effective buffer cache management scheme to exploit both temporal and spatial locality Sequentiality of requested blocks on disks, or their spatial locality, is critical to the performance of disks, where the throughput of accesses to sequentially placed disk blocks can be an order of magnitude higher than that of accesses to randomly placed blocks. Unfortunately, spatial locality of cached blocks is largely ignored and only temporal locality is considered in system buffer cache management. Thus, disk performance for workloads without dominant sequential accesses can be seriously degraded. To address this problem, we propose a scheme called DULO (DUal LOcality), which exploits both temporal and spatial locality in buffer cache management.
Leveraging the filtering effect of the buffer cache, DULO can influence the I/O request stream by making the requests passed to disk more sequential, significantly increasing the effectiveness of I/O scheduling and prefetching for disk performance improvements. DULO has been extensively evaluated by both trace-driven simulations and a prototype implementation in Linux 2.6.11. In the simulations and system measurements, various application workloads have been tested, including Web Server, TPC benchmarks, and scientific programs. Our experiments show that DULO can significantly increase system throughput and reduce program execution times. | Actions with Indirect Effects (Preliminary Report) | Improving file system reliability with I/O shepherding We introduce a new reliability infrastructure for file systems called I/O shepherding. I/O shepherding allows a file system developer to craft nuanced reliability policies to detect and recover from a wide range of storage system failures. We incorporate shepherding into the Linux ext3 file system through a set of changes to the consistency management subsystem, layout engine, disk scheduler, and buffer cache. The resulting file system, CrookFS, enables a broad class of policies to be easily and correctly specified. We implement numerous policies, incorporating data protection techniques such as retry, parity, mirrors, checksums, sanity checks, and data structure repairs; even complex policies can be implemented in less than 100 lines of code, confirming the power and simplicity of the shepherding framework. We also demonstrate that shepherding is properly integrated, adding less than 5% overhead to the I/O path. | Asymptotically optimal encodings of conformant planning in QBF The world is unpredictable, and acting intelligently requires anticipating possible consequences of actions that are taken. Assuming that the actions and the world are deterministic, planning can be represented in the classical propositional logic. Introducing nondeterminism (but not probabilities) or several initial states increases the complexity of the planning problem and requires the use of quantified Boolean formulae (QBF). The currently leading logic-based approaches to conditional planning use explicitly or implicitly a QBF with the prefix ∃∀∃. We present formalizations of the planning problem as QBF which have an asymptotically optimal linear size and the optimal number of quantifier alternations in the prefix: ∃∀ and ∀∃. This is in accordance with the fact that the planning problem (under the restriction to polynomial size plans) is on the second level of the polynomial hierarchy, not on the third. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. 
Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.111184 | 0.101913 | 0.101913 | 0.053339 | 0.02667 | 0.00045 | 0.000094 | 0.000013 | 0.000001 | 0 | 0 | 0 | 0 | 0 |
Biologically Inspired Models to Train Neural Networks A major stumbling block to the successful implementation of neural networks for nonlinear regression models is overtraining. This paper presents two models to combat overtraining, based on the biological concept that the strength of the connection between neurons develops over time. For the problems investigated, the models produce smooth solutions with no sign of overtraining. For the practical Diminishing Returns problem, the Error Constraints Model, because of its mathematical formulation, determines the minimum number of hidden layer neurons. As a result, the present work is important because, if an acceptable level of error can be specified, then the Error Constraints Model can be used to determine the network architecture. Copies of the workbook implementations of all the models presented in this paper may be downloaded from the author's website. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms.
Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. 
| Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. 
We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | A Terminological Interpretation of (Abductive) Logic Programming The logic program formalism is commonly viewed as a modal or default logic. In this paper, we propose an alternative interpretation of the formalism as a terminological logic. A terminological logic is designed to represent two different forms of knowledge. A TBox represents definitions for a set of concepts. An ABox represents the assertional knowledge of the expert. In our interpretation, a logic program is a TBox providing definitions for all predicates; this interpretation is present... | On methodology of representing knowledge in dynamic domains The main goal of this paper is to outline a methodology of programming in dynamic problem domains. The methodology is based on recent developments in theories of reasoning about action and change and in logic programming. The basic ideas of the approach are illustrated by discussion of the design of a program which verifies plans to control the reaction control system (RCS) of the Space Shuttle. We start with formalization of the RCS domain in an action description language. The resulting formalization ARCS together with a candidate plan α and a goal G are given as an input to a logic program. This program verifies if G would be true after executing α in the current situation. A high degree of trust in the program's correctness was achieved by (a) the simplicity and transparency of our formalization, ARCS, which made it possible for the users to informally verify its correctness; (b) a proof of correctness of the program with respect to ARCS. This is an ongoing work under a contract with the United Space Alliance—the company primarily responsible for operating the Space Shuttle. | Invariance, Maintenance, and Other Declarative Objectives of Triggers - A Formal Characterization of Active Databases In this paper we take steps towards a systematic design of active features in an active database. We propose having declarative specifications that specify the objective of an active database and formulate the correctness of triggers with respect to such specifications. In the process we distinguish between the notions of 'invariance' and 'maintenance' and propose four different classes of specification constraints. We also propose three different types of triggers with distinct purposes and show through the analysis of an example from the literature, the correspondence between these trigger types and the specification classes. Finally, we briefly introduce the notion of k-maintenance that is important from the perspective of a reactive (active database) system.
| Reasoning about Policies using Logic Programs We use a simplified version of the Policy Description Language PDL introduced in (Lobo, Bhatia, & Naqvi 1999) to represent and reason about policies. In PDL a policy description is a collection of Event-Condition-Action-Rules that defines a mapping from event histories into action histories. In this paper we introduce the generation problem: finding an event history generating an action history, and state its complexity. Because of its high complexity we present a logic programming based... | A monotonicity theorem for extended logic programs Because general and extended logic programs behave nonmonotonically, it is in general difficult to predict how even minor changes to such programs will affect their meanings. This paper shows that for a restricted class of extended logic programs --- those with signings --- it is possible to state a fairly general theorem comparing the entailments of programs. To this end, we generalize (to the class of extended logic programs) the definition of a signing, first formulated by Kunen for general ... | Logic programs with exceptions We extend logic programming to deal with default reasoning by allowing the explicit representation of exceptions in addition
to general rules. To formalise this extension, we modify the answer set semantics of Gelfond and Lifschitz, which allows both
classical negation and negation as failure.
We also propose a transformation which eliminates exceptions by using negation by failure. The transformed program can be
implemented by standard logic programming methods, such as SLDNF. The explicit representation of rules and exceptions has
the virtue of greater naturalness of expression. The transformed program, however, is easier to implement. | Answer set programming for collaborative housekeeping robotics: representation, reasoning, and execution Answer set programming (ASP) is a knowledge representation and reasoning paradigm with high-level expressive logic-based formalism, and efficient solvers; it is applied to solve hard problems in various domains, such as systems biology, wire routing, and space shuttle control. In this paper, we present an application of ASP to housekeeping robotics. We show how the following problems are addressed using computational methods/tools of ASP: (1) embedding commonsense knowledge automatically extracted from the commonsense knowledge base ConceptNet, into high-level representation, and (2) embedding (continuous) geometric reasoning and temporal reasoning about durations of actions, into (discrete) high-level reasoning. We introduce a planning and monitoring algorithm for safe execution of plans, so that robots can recover from plan failures due to collision with movable objects whose presence and location are not known in advance or due to heavy objects that cannot be lifted alone. Some of the recoveries require collaboration of robots. We illustrate the applicability of ASP on several housekeeping robotics problems, and report on the computational efficiency in terms of CPU time and memory. | Temporal reasoning in logic programming: a case for the situation calculus We propose, and axiomatize, an extended version of the situation calculus [12] for temporal reasoning in a logic programming framework. This extended language provides for a linear temporal structure, which may be viewed as a path of actual event occurrences within the tree of possible situations of the "classical" situation calculus. The extended language provides for events to occur and fluents to hold at specific points in time. As a result, it is possible to establish a close correspondence ... | Reasoning about action I: a possible worlds approach Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to the related problem that it is not always reasonable to explicitly specify all of the consequences of actions. The approach involves keeping a single model of the world that is updated when actions... | Provably Difficult Combinatorial Games | Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
| Storage Management for Web Proxies Today, caching web proxies use general-purpose file systems to store web objects. Proxies, e.g., Squid or Apache, when running on a UNIX system, typically use the standard UNIX file system (UFS) for this purpose. UFS was designed for research and engineering environments, which have different characteristics from that of a caching web proxy. Some of the differences are high temporal locality, relaxed persistence requirements, and a different read/write ratio. In this paper, we characterize the web proxy workload, describe the design of Hummingbird, a light-weight file system for web proxies, and present performance measurements of Hummingbird. Hummingbird has two distinguishing features: it separates object naming and storage locality through direct application-provided hints, and its clients are compiled with a linked library interface for memory sharing. When we simulated the Squid proxy, Hummingbird achieves document request throughput 2.3-9.4 times larger than with several different versions of UFS. Our experimental results are verified within the Polygraph proxy benchmarking environment. | Anatomical Structure Sketcher For Cephalograms By Bimodal Deep Learning The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnose and treatment planning in clinical dentistry. The robust anatomical structure detection and accurate annotation remain challenging considering the personal skeletal variations and image blurs caused by device-specific projection magnification, together with structure overlapping in the lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path in a manner similar to the data completion. The proposed method is robust not only to structure detection, but also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method. | 1.002387 | 0.007407 | 0.003855 | 0.003704 | 0.002222 | 0.002014 | 0.001604 | 0.001111 | 0.000478 | 0.000093 | 0.000002 | 0 | 0 | 0 |
Searching Powerset Automata by Combining Explicit-State and Symbolic Model Checking The ability to analyze a digital system under conditions of uncertainty is important in several application domains. The problem is naturally described in terms of search in the powerset of the automaton representing the system. However, the associated exponential blowup prevents the application of traditional model checking techniques. This work describes a new approach to searching powerset automata, which does not require the explicit powerset construction. We present an efficient representation of the search space based on the combination of symbolic and explicit-state model checking techniques. We describe several search algorithms, based on two different, complementary search paradigms, and we experimentally evaluate the approach. | Open World Planning in the Situation Calculus We describe a forward reasoning planner for open worlds that uses domain specific information for pruning its search space, as suggested by (Bacchus & Kabanza 1996; 2000). The planner is written in the situation calculus-based programming language GOLOG, and it uses a situation calculus axiomatization of the application domain. Given a sentence φ to prove, the planner regresses it to an equivalent sentence φ′ about the initial situation, then invokes a theorem prover to determine... | Heuristic search + symbolic model checking = efficient conformant planning We consider the problem of how an agent creates a discrete spatial representation from its continuous interactions with the environment. Such representation will be the minimal one that explains the experiences of the agent in the environment. In this ... | Conformant planning via symbolic model checking We tackle the problem of planning in nondeterministic domains, by presenting a new approach to conformant planning. Conformant planning is the problem of finding a sequence of actions that is guaranteed to achieve the goal despite the nondeterminism of the domain. Our approach is based on the representation of the planning domain as a finite state automaton. We use Symbolic Model Checking techniques, in particular Binary Decision Diagrams, to compactly represent and efficiently search the automaton. In this paper we make the following contributions. First, we present a general planning algorithm for conformant planning, which applies to fully nondeterministic domains, with uncertainty in the initial condition and in action effects. The algorithm is based on a breadth-first, backward search, and returns conformant plans of minimal length, if a solution to the planning problem exists, otherwise it terminates concluding that the problem admits no conformant solution. Second, we provide a symbolic representation of the search space based on Binary Decision Diagrams (BDDs), which is the basis for search techniques derived from symbolic model checking. The symbolic representation makes it possible to analyze potentially large sets of states and transitions in a single computation step, thus providing for an efficient implementation. Third, we present CMBP (Conformant Model Based Planner), an efficient implementation of the data structures and algorithm described above, directly based on BDD manipulations, which allows for a compact representation of the search layers and an efficient implementation of the search steps. Finally, we present an experimental comparison of our approach with the state-of-the-art conformant planners CGP, QBFPLAN and GPT.
Our analysis includes all the planning problems from the distribution packages of these systems, plus other problems defined to stress a number of specific factors. Our approach appears to be the most effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all the problems where a comparison is possible, CMBP outperforms its competitors, sometimes by orders of magnitude. | Representing action: indeterminacy and ramifications We define and study a high-level language for describing actions, more expressive than the action language A introduced by Gelfond and Lifschitz. The new language, AR, allows us to describe actions with indirect effects (ramifications), nondeterministic actions, and actions that may be impossible to execute. It has symbols for nonpropositional fluents and for the fluents that are exempt from the commonsense law of inertia. Temporal projection problems specified using the language AR can be... | Weak, strong, and strong cyclic planning via symbolic model checking Planning in nondeterministic domains yields both conceptual and practical difficulties. From the conceptual point of view, different notions of planning problems can be devised: for instance, a plan might either guarantee goal achievement, or just have some chances of success. From the practical point of view, the problem is to devise algorithms that can effectively deal with large state spaces. In this paper, we tackle planning in nondeterministic domains by addressing conceptual and practical problems. We formally characterize different planning problems, where solutions have a chance of success ("weak planning"), are guaranteed to achieve the goal ("strong planning"), or achieve the goal with iterative trial-and-error strategies ("strong cyclic planning"). In strong cyclic planning, all the executions associated with the solution plan always have a possibility of terminating and, when they do, they are guaranteed to achieve the goal. We present planning algorithms for these problem classes, and prove that they are correct and complete. We implement the algorithms in the MBP planner by using symbolic model checking techniques. We show that our approach is practical with an extensive experimental evaluation: MBP compares positively with state-of-the-art planners, both in terms of expressiveness and in terms of performance. | Complexity of Planning with Partial Observability We show that for conditional planning with partial observability the existence problem of plans with success probability 1 is 2-EXP-complete. This result completes the complexity picture for non-probabilistic propositional planning. We also give new more direct and informative proofs for the EXP-hardness of conditional planning with full observability and the EXPSPACE-hardness of conditional planning without observability. The proofs demonstrate how lack of full observability allows the encoding of exponential space Turing machines in the planning problem, and how the necessity to have branching in plans corresponds to the move to a complexity class defined in terms of alternation from the corresponding deterministic complexity class. Lack of full observability necessitates the use of belief states, the number of which is exponential in the number of states, and alternation corresponds to the choices a branching plan can make.
| Probabilistic Planning with Information Gathering and Contingent Execution Most AI representations and algorithms for plan generation have not included the concept of information-producing actions (also called diagnostics, or tests, in the decision making literature). We present a planning representation and algorithm that models information-producing actions and constructs plans that exploit the information produced by those actions. We extend the buridan (Kushmerick et al. 1994) probabilistic planning algorithm, adapting the action representation to model the... | On the complexity of domain-independent planning In this paper, we examine how the complexity of domain-independent planning with STRIPS-style operators depends on the nature of the planning operators. We show how the time complexity varies depending on a wide variety of conditions: • whether or not delete lists are allowed; • whether or not negative preconditions are allowed; • whether or not the predicates are restricted to be propositions (i.e., 0-ary); • whether the planning operators are given as part of the input to the planning problem, or instead are fixed in advance. | On the complexity of planning for agent teams and its implications for single agent planning If the complexity of planning for a single agent is described by some function f of the input, how much more difficult is it to plan for a team of n cooperating agents? If these agents are completely independent, we can simply solve n single agent problems, scaling linearly with the number of agents. But if all the agents interact tightly, we really need to solve a single problem that is n times larger, which could be exponentially (in n) harder to solve. Is a more general characterization possible? To formulate this question precisely, we minimally extend the standard STRIPS model to describe multi-agent planning problems. Then, we identify two problem parameters that help us answer our question. The first parameter is independent of the precise task the multi-agent system should plan for, and it captures the structure of the possible direct interactions between the agents via the tree-width of a graph induced by the team. The second parameter is task-dependent, and it captures the minimal number of interactions by the "most interacting" agent in the team that is needed to solve the problem. We show that multi-agent planning problems can be solved in time exponential only in these parameters. Thus, when these parameters are bounded, the complexity scales only polynomially in the size of the agent team. These results also have direct implications for the single-agent case: by casting single-agent planning tasks as multi-agent planning tasks, we can devise novel methods for decomposition-based planning for single agents. We analyze one such method, and use the techniques developed to provide some of the strongest tractability results for classical single-agent planning to date. | Reasoning about actions with sensing under qualitative and probabilistic uncertainty We focus on the aspect of sensing in reasoning about actions under qualitative and probabilistic uncertainty. We first define the action language E for reasoning about actions with sensing, which has a semantics based on the autoepistemic description logic ALCKNF, and which is given a formal semantics via a system of deterministic transitions between epistemic states. As an important feature, the main computational tasks in E can be done in linear and quadratic time.
We then introduce the action language E+ for reasoning about actions with sensing under qualitative and probabilistic uncertainty, which is an extension of E by actions with nondeterministic and probabilistic effects, and which is given a formal semantics in a system of deterministic, nondeterministic, and probabilistic transitions between epistemic states. We also define the notion of a belief graph, which represents the belief state of an agent after a sequence of deterministic, nondeterministic, and probabilistic actions, and which compactly represents a set of unnormalized probability distributions. Using belief graphs, we then introduce the notion of a conditional plan and its goodness for reasoning about actions under qualitative and probabilistic uncertainty. We formulate the problems of optimal and threshold conditional planning under qualitative and probabilistic uncertainty, and show that they are both uncomputable in general. We then give two algorithms for conditional planning in our framework. The first one is always sound, and it is also complete for the special case in which the relevant transitions between epistemic states are cycle-free. The second algorithm is a sound and complete solution to the problem of finite-horizon conditional planning in our framework. Under suitable assumptions, it computes every optimal finite-horizon conditional plan in polynomial time. We also describe an application of our formalism in a robotic-soccer scenario, which underlines its usefulness in realistic applications. | More accurate tests for the statistical significance of result differences Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests. | Total-order multi-agent task-network planning for contract bridge This paper describes the results of applying a modified version of hierarchical task-network (HTN) planning to the problem of declarer play in contract bridge. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. This approach avoids the difficulties that traditional game-tree search techniques have with imperfect-information games such as bridge--but it also differs in several significant ways from the planning techniques used in typical HTN planners. We describe why these modifications were needed in order to build a successful planner for bridge. This same modified HTN planning strategy appears to be useful in a variety of application domains--for example, we have used the same planning techniques in a process-planning system for the manufacture of complex electro-mechanical devices (Hebbar et al. 1996). We discuss why the same technique has been successful in two such diverse domains. 
| Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification. | 1.077411 | 0.014787 | 0.013923 | 0.008753 | 0.005862 | 0.001707 | 0.000257 | 0.000119 | 0.000057 | 0.000016 | 0 | 0 | 0 | 0 |
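The query and several of the ranked abstracts in the row above revolve around conformant planning: finding an action sequence that reaches the goal despite uncertainty about the initial state and nondeterministic effects. Purely as an illustrative sketch (not code from any of the cited papers, and with a toy domain, action names, and helper functions invented for illustration), the Python fragment below performs the naive explicit belief-state search over the powerset of states that the BDD-based symbolic techniques described above are designed to avoid.

```python
from collections import deque

def conformant_bfs(actions, step, initial_belief, goal_states):
    """Breadth-first search over belief states (sets of possible world states).

    `step(state, action)` returns the set of possible successor states, so both
    nondeterministic effects and an uncertain initial state are handled.
    Returns a shortest action sequence that reaches `goal_states` from every
    state in `initial_belief`, or None if no conformant plan exists.
    """
    start = frozenset(initial_belief)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        belief, plan = frontier.popleft()
        if belief <= goal_states:
            return plan
        for action in actions:
            successor = frozenset(s2 for s in belief for s2 in step(s, action))
            if successor and successor not in visited:
                visited.add(successor)
                frontier.append((successor, plan + [action]))
    return None

# Toy domain: positions 0..3 on a line, initial position unknown (0 or 1),
# and "right" moves one step right, saturating at position 3; goal is position 3.
if __name__ == "__main__":
    step = lambda s, a: {min(s + 1, 3)} if a == "right" else {s}
    print(conformant_bfs(["right"], step, {0, 1}, frozenset({3})))  # ['right', 'right', 'right']
```

Because every belief state is materialized as an explicit set, this sketch blows up exponentially on larger domains, which is exactly the motivation for the symbolic representations discussed in the abstracts.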
Filling the Gaps: Improving Wikipedia Stubs The availability of only a limited number of contributors on Wikipedia cannot ensure consistent growth and improvement of the online encyclopedia. With information being scattered on the web, our goal is to automate the process of generation of content for Wikipedia. In this work, we propose a technique of improving stubs on Wikipedia that do not contain comprehensive information. A classifier learns features from the existing comprehensive articles on Wikipedia and recommends content that can be added to the stubs to improve the completeness of such stubs. We conduct experiments using several classifiers - Latent Dirichlet Allocation (LDA) based model, a deep learning based architecture (Deep belief network) and TFIDF based classifier. Our experiments reveal that the LDA based model outperforms the other models (~6% F-score). Our generation approach shows that this technique is capable of generating comprehensive articles. ROUGE-2 scores of the articles generated by our system outperform the articles generated using the baseline. Content generated by our system has been appended to several stubs and successfully retained in Wikipedia. | Wikikreator: Improving Wikipedia Stubs Automatically Stubs on Wikipedia often lack comprehensive information. The huge cost of editing Wikipedia and the presence of only a limited number of active contributors curb the consistent growth of Wikipedia. In this work, we present WikiKreator, a system that is capable of generating content automatically to improve existing stubs on Wikipedia. The system has two components. First, a text classifier built using topic distribution vectors is used to assign content from the web to various sections on a Wikipedia article. Second, we propose a novel abstractive summarization technique based on an optimization framework that generates section-specific summaries for Wikipedia stubs. Experiments show that WikiKreator is capable of generating well-formed informative content. Further, automatically generated content from our system have been appended to Wikipedia stubs and the content has been retained successfully proving the effectiveness of our approach. | Playscript Classification and Automatic Wikipedia Play Articles Generation In this work, we aim to create Wikipedia pages on plays automatically by extracting relevant information from various web sources. Our approach involves building an efficient classifier that can classify web documents as play scripts. From the set of correctly classified instances of play scripts, we extract relevant play-related information from the documents and use it to obtain additional information from various sources on the web. This information is aggregated and human-readable Wikipedia pages are created using a bot. The results of our experiments show that classifiers trained by combining our designed features along with "bag-of-words" (bow) features outperform classifiers trained using only bow features. Our approach further shows that good quality human-readable pages can be created using our bot. Such automatic page generation process can eventually ensure a more complete Wikipedia. | Extended stable semantics for normal and disjunctive programs | The nature of statistical learning theory. First Page of the Article | A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
| An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCOPOP a planner that manages actions with disjunctive precondition, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCOPOP. | Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem. | Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of its power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning. | A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution. | Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. 
We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs. | ARIMA time series modeling and forecasting for adaptive I/O prefetching Bursty application I/O patterns, together with transfer limited storage devices, combine to create a major I/O bottleneck on parallel systems. This paper explores the use of time series models to forecast application I/O request times, then prefetching I/O requests during computation intervals to hide I/O latency. Experimental results with I/O intensive scientific codes show performance improvements compared to standard UNIX prefetching strategies. | Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times. | Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification. | 1.2 | 0.1 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
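The query abstract in the row above compares LDA, deep-belief-network, and TF-IDF based classifiers for assigning web content to Wikipedia article sections. The snippet below is only a minimal sketch of the TF-IDF baseline using scikit-learn; it is not the authors' code, and the four training snippets and section labels are invented solely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical miniature training set: (web text snippet, target article section).
train_texts = [
    "born in 1921 and raised in a small village before moving abroad",
    "studied physics at the university and received a doctorate in 1947",
    "won the national award for her contributions to chemistry",
    "published three influential monographs on thermodynamics",
]
train_sections = ["Early life", "Education", "Awards", "Works"]

# TF-IDF features feeding a linear classifier: the simplest of the three
# section-assignment models compared in the query abstract above.
section_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
section_clf.fit(train_texts, train_sections)
print(section_clf.predict(["she completed a doctorate in physics at the university"]))
```

In practice the training pairs would come from sectioned text of existing comprehensive articles, and the predicted label decides which stub section a retrieved web passage should extend.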
A Deep Neural Network based approach for vocal extraction from songs Songs and media files on the internet have grown at an unprecedented rate. Music has been one of the primary sources of entertainment for mankind for a long time now. With such large amounts of media available to us, it has become possible to use this to our advantage to solve problems which have been considered difficult to solve traditionally. One such problem is the separation of vocals and instrumental part from a song. This problem has largely remained unsolved despite a lot of work having been done on it, largely due to the difficulty in separating these two components of a song due to the high correlation and coherence between the two. In this paper we present a Deep Neural Network based approach to approach the problem and demonstrate how it shows a lot of promise for several types of songs and outperforms the existing techniques for most songs. Several Neural network architectures are experimented with and a detailed comparison between the results obtained from the various architectures are discussed in this paper. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. 
Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
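The row above describes deep-network-based separation of vocals from a musical mixture. As an illustrative sketch only (the paper's actual architectures are not reproduced here, and random tensors stand in for real spectrogram data), the PyTorch fragment below trains a small feed-forward network to predict a per-frequency-bin soft mask over magnitude-spectrogram frames; multiplying the mixture spectrogram by the predicted mask gives the vocal estimate.

```python
import torch
from torch import nn

# Stand-in data: 256 magnitude-spectrogram frames with 513 frequency bins each,
# plus target soft masks in [0, 1]. Real training data would come from
# mixtures paired with isolated vocal tracks.
n_frames, n_bins = 256, 513
mixture = torch.rand(n_frames, n_bins)
target_mask = torch.rand(n_frames, n_bins)

# A small feed-forward network predicting a per-bin mask for each frame.
model = nn.Sequential(
    nn.Linear(n_bins, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_bins), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    predicted_mask = model(mixture)
    # Compare masked mixtures so the loss is expressed in spectrogram magnitude.
    loss = nn.functional.mse_loss(predicted_mask * mixture, target_mask * mixture)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

vocal_estimate = model(mixture) * mixture  # masked spectrogram of the vocal part
```

A real pipeline would additionally convert audio to and from the spectrogram domain (e.g., via an STFT) and evaluate separation quality on held-out songs.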
A Preliminary Study Of The Suitability Of Deep Learning To Improve Lidar-Derived Biomass Estimation Light Detection and Ranging (LiDAR) is a remote sensor able to extract three-dimensional information about forest structure. Biophysical models have taken advantage of the use of LiDAR-derived information to improve their accuracy. Multiple Linear Regression (MLR) is the most common method in the literature regarding biomass estimation to define the relation between the set of field measurements and the statistics extracted from a LiDAR flight. Unfortunately, there exist open issues regarding the generalization of models from one area to another due to the lack of knowledge about noise distribution, relationship between statistical features and risk of overfitting. Autoencoders (a type of deep neural network) have been applied to improve the results of machine learning techniques in recent times by undoing possible data corruption process and improving feature selection. This paper presents a preliminary comparison between the use of MLR with and without preprocessing by autoencoders on real LiDAR data from two areas in the province of Lugo (Galizia, Spain). The results show that autoencoders statistically increased the quality of MLR estimations by around 15-30%. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
| Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
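The query abstract in the row above combines an autoencoder preprocessing step with Multiple Linear Regression (MLR) for LiDAR-derived biomass estimation. The sketch below illustrates that two-stage idea on invented, random stand-in data; it is not the authors' pipeline, and the number of LiDAR statistics, the bottleneck size, and the training schedule are arbitrary assumptions made only for the example.

```python
import numpy as np
import torch
from torch import nn
from sklearn.linear_model import LinearRegression

# Stand-in data: 200 field plots, 30 LiDAR-derived statistics each, and a
# measured biomass value per plot (all random here, purely for illustration).
rng = np.random.default_rng(0)
lidar_stats = torch.tensor(rng.random((200, 30)), dtype=torch.float32)
biomass = rng.random(200)

# Stage 1: a small autoencoder whose 8-dimensional bottleneck acts as the
# cleaned-up feature set (the preprocessing step described in the abstract).
encoder = nn.Sequential(nn.Linear(30, 8), nn.ReLU())
decoder = nn.Linear(8, 30)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for _ in range(300):
    optimizer.zero_grad()
    reconstruction = decoder(encoder(lidar_stats))
    loss = nn.functional.mse_loss(reconstruction, lidar_stats)
    loss.backward()
    optimizer.step()

# Stage 2: Multiple Linear Regression on the encoded features.
codes = encoder(lidar_stats).detach().numpy()
mlr = LinearRegression().fit(codes, biomass)
print("R^2 on the synthetic training data:", mlr.score(codes, biomass))
```

With real data, the comparison in the abstract amounts to fitting MLR once on the raw LiDAR statistics and once on the autoencoder codes, then contrasting the two regressions on held-out plots.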
Behavioral-level synthesis of heterogeneous BISR reconfigurable ASIC's In this paper, behavioral-level synthesis techniques are presented for the design of reconfigurable hardware. The techniques are applicable for synthesis of several classes of designs, including 1) design for fault tolerance against permanent faults, 2) design for improved manufacturability, and 3) design of application specific programmable processors (ASPP's)--processors designed to perform any computation from a specified set on a single implementation platform. This paper focuses on design techniques for efficient built-in self-repair (BISR), and thus directly addresses the former two applications. Previous BISR techniques have been based on replacing a failed module with a backup of the same type. We present new heterogeneous BISR methodologies which remove this constraint and enable replacement of a module with a spare of a different type. The approach is based on the flexibility of behavioral-level synthesis to explore the design space. Two behavioral synthesis techniques are developed; the first method is through assignment and scheduling, and the second utilizes transformations. Experimental results verify the effectiveness of the approaches. | A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. | Incremental recovery in main memory database systems Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM. | Microprocessor technology trends The rapid pace of advancement of microprocessor technology has shown no sign of diminishing, and this pace is expected to continue in the future. 
Recent trends in such areas as silicon technology, processor architecture and implementation, system organization, buses, higher levels of integration, self-testing, caches, coprocessors, and fault tolerance are discussed, and expectations for further ad... | Multi-Join Optimization for Symmetric Multiprocessors | A combined method for maintaining large indices in multiprocessor multidisk environments Consider the problem of maintaining large indices (or secondary memory indices) in a multiprocessor multidisk environment in which each processor has a dedicated secondary memory (one disk or more). The processors either reside in the same site and communicate via shared memory, or reside in different sites and communicate via a local broadcast network. The straightforward method (SFM) for maintaining such an index, which is commonly called declustering, is to partition the index records equally among the processors, each of which maintains its part of the index in a local B+ tree. In prior work (Inform. Processing Lett., vol. 34, pp. 313-321, May 1990), we have presented another method, called the "totally distributed B+ tree" (TDB) method, in which all processors together implement a "wide" B+ tree. There are settings in which the second method is better than the first method, and vice versa. In this paper, we present a new method, called the combined distribution method (CDM), that combines the ideas underlying SFM and TDB. In tightly coupled environments, CDM outperforms both SFM and TDB in almost all practical settings (in many settings by more than 30%). This is shown by an approximate analysis and verified by simulations. Note that CDM's approach can improve performance in database systems that use a RAID (redundant array of inexpensive disks). | The DASDBS Project: Objectives, Experiences, and Future Prospects A retrospective of the Darmstadt database system project, also known as DASDBS, is presented. The project is aimed at providing data management support for advanced applications, such as geo-scientific information systems and office automation. Similar to the dichotomy of RSS and RDS in System R, a layered architectural approach was pursued: a storage management kernel serves as the lowest common denominator of the requirements of the various applications classes, and a family of application-oriented front-ends provides semantically richer functions on top of the kernel. The lessons that were learned from building the DASDBS system are discussed. Particular emphasis is placed on the following issues: the role of nested relations, the experiences with using object buffers for coupling the system with the programming-language environment and the learning process in implementing multilevel transactions. | The K-D-B-tree: a search structure for large multidimensional dynamic indexes The problem of retrieving multikey records via range queries from a large, dynamic index is considered. By large it is meant that most of the index must be stored on secondary memory. By dynamic it is meant that insertions and deletions are intermixed with queries, so that the index cannot be built beforehand. A new data structure, the K-D-B-tree, is presented as a solution to this problem. K-D-B-trees combine properties of K-D-trees and B-trees. It is expected that the multidimensional search efficiency of balanced K-D-trees and the I/O efficiency of B-trees should both be approximated in the K-D-B-tree. Preliminary experimental results that tend to support this are reported.
| Analytic Modeling and Comparisons of Striping Strategies for Replicated Disk Arrays Data replication has been widely used as a means of increasing the data availability for critical applications in the event of disk failure. There are different ways of organizing the two copies of the data across a disk array. This paper compares strategies for striping data of the two copies in the context of database applications. By keeping both copies active, we explore strategies that can take advantage of the additional copy to improve not only availability, but also performance during both normal and failure modes. We consider the effects of small and large stripe sizes on the performance of disk arrays with two active copies of data under a mixed workload of queries and transactions with a skewed access pattern. We propose a dual (hybrid) striping strategy which uses different stripe sizes for the two copies and a disk queuing policy designed to exploit this organization for optimal performance. An analytical model is devised for this scheme, by treating the individual disks as independent, and applying an M/G/1 queuing model. Disks on which a large query scan is running are modeled by a variation of the queue with permanent customers, which leads to an iterative functional equation for the query scan delay distribution. A solution for this equation is given. The results are validated against simulations and are shown to match well. Comparison with uniform striping strategies shows that the dual striping scheme yields the most stable performance in a variety of workloads, out-performing the uniform striping strategy using either mirrored or chained declustering under both normal and failure mode operations. | Formal methods for the validation of automotive product configuration data Constraint-based reasoning is often used to represent and find solutions to configuration problems. In the field of constraint satisfaction, the major focus has been on finding solutions to difficult problems. However, many real-life configuration problems, ... | Where the really hard problems are It is well known that for many NP-complete problems, such as K-Sat, etc., typical cases are easy to solve; so that computationally hard cases must be rare (assuming P≠NP). This paper shows that NP-complete problems can be summarized by at least one "order parameter", and that the hard problems occur at a critical value of such a parameter. This critical value separates two regions of characteristically different properties. For example, for K-colorability, the critical value separates overconstrained from underconstrained random graphs, and it marks the value at which the probability of a solution changes abruptly from near 0 to near 1. It is the high density of well-separated almost solutions (local minima) at this boundary that causes search algorithms to "thrash". This boundary is a type of phase transition and we show that it is preserved under mappings between problems. We show that for some P problems either there is no phase transition or it occurs for bounded N (and so bounds the cost). These results suggest a way of deciding if a problem is in P or NP and why they are different. | Proximal Methods for Hierarchical Sparse Coding Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree.
This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper, we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the l1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally self-organize in a prespecified arborescent structure, leading to better performance in reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models. | A Realistic Experiment in Knowledge Representation in Open Event Calculus: Protocol Specification This paper presents one of the first realistic experiments in the use of Event Calculusin Open Logic Programming: the specification of a process protocol. The specification taskinvolves most of the common complications of temporal reasoning: the representation of contextdependent actions, of preconditions and ramifications of actions, the modelling of systemfaults, and most of all, the representation of uncertainty of actions. As the underlying language,the Open Logic Programming... | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2496 | 0.000274 | 0.000054 | 0.000054 | 0.00003 | 0.000018 | 0.000012 | 0.000002 | 0 | 0 | 0 | 0 | 0 | 0 |
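The "Protecting RAID Arrays against Unexpectedly High Disk Failure Rates" entry in the row above describes a two-dimensional array of n^2 data elements protected by 2n parity elements, plus n optional extra parities that mirror half of the existing ones. The Python sketch below shows the usual way such row and column parities are computed with XOR; the abstract gives no layout details and does not say which half of the parity elements is mirrored, so the choice of mirroring the row parities here is purely an assumption for illustration.

from functools import reduce
from operator import xor

def two_dimensional_parities(data):
    # data: n x n list of integer data blocks.
    # Returns the 2n parity elements of a 2-D parity array: one XOR parity per
    # row and one per column.
    n = len(data)
    row_parity = [reduce(xor, data[i]) for i in range(n)]
    col_parity = [reduce(xor, (data[i][j] for i in range(n))) for j in range(n)]
    return row_parity, col_parity

def extra_mirror_parities(row_parity, col_parity):
    # The abstract adds n further parity elements mirroring half of the existing
    # 2n parities; which half is mirrored is not stated, so mirroring the row
    # parities is an assumption made only for this sketch.
    return list(row_parity)

With this layout a single lost data block can be rebuilt by XOR-ing the surviving blocks of either its row or its column against the corresponding parity element.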
Understanding disk failure rates: What does an MTTF of 1,000,000 hours mean to you? Component failure in large-scale IT installations is becoming an ever-larger problem as the number of components in a single cluster approaches a million. This article is an extension of our previous study on disk failures [Schroeder and Gibson 2007] and presents and analyzes field-gathered disk replacement data from a number of large production systems, including high-performance computing sites and internet services sites. More than 110,000 disks are covered by this data, some for an entire lifetime of five years. The data includes drives with SCSI and FC, as well as SATA interfaces. The mean time-to-failure (MTTF) of those drives, as specified in their datasheets, ranges from 1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%. We find that in the field, annual disk replacement rates typically exceed 1%, with 2--4% common and up to 13% observed on some systems. This suggests that field replacement is a fairly different process than one might predict based on datasheet MTTF. We also find evidence, based on records of disk replacements in the field, that failure rate is not constant with age, and that rather than a significant infant mortality effect, we see a significant early onset of wear-out degradation. In other words, the replacement rates in our data grew constantly with age, an effect often assumed not to set in until after a nominal lifetime of 5 years. Interestingly, we observe little difference in replacement rates between SCSI, FC, and SATA drives, potentially an indication that disk-independent factors such as operating conditions affect replacement rates more than component-specific ones. On the other hand, we see only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of media error rates, and this instance involved SATA disks. Time between replacement, a proxy for time between failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence. | On Variable Scope of Parity Protection in Disk Arrays In a common form of a RAID 5 architecture, data is organized on a disk array consisting of N + 1 disks into stripes of N data blocks and one parity block (with parity block locations staggered so as to balance the number of parity blocks on each disk). This allows data to be recovered in the event of a single disk failure. Here we consider an extension to this architecture in which parity information applies to arbitrary subsets of the data blocks in each stripe. Using several simplifying assumptions, we present simulation and analytic results that provide estimates of the improvement using this approach, in terms of total I/O operations, as compared to 1) conventional RAID 5 under a random single-block write workload, and 2) the use of a log-structured file system in which data is written out in stripes. Results on the reduction of disk recovery costs are also presented. | Uniform parity group distribution in disk arrays with multiple failures Several new disk arrays have recently been proposed in which the parity groupings are uniformly distributed throughout the array so that the extra workload created by a disk failure can be evenly shared by all the surviving disks, resulting in the best possible degraded mode performance. 
Many arrays now also put in multiple spare disks so that expensive service calls can be deferred. Furthermore, in a new sparing scheme called distributed sparing, the spare spaces are actually distributed throughout the array. This means after a rebuild the new array will be logically different from the original array. The authors present an algorithm for constructing and maintaining arrays with distributed sparing so that repeated uniform parity group distribution is achieved with each successive failure. | Multi-level RAID for very large disk arrays Very Large Disk Arrays - VLDAs have been developed to cope with the rapid increase in the volume of data generated requiring ultrareliable storage. Bricks or Storage Nodes - SNs holding a dozen or more disks are cost effective VLDA building blocks, since they cost less than traditional disk arrays. We utilize the Multilevel RAID - MRAID paradigm for protecting both SNs and their disks. Each SN is a k-disk-failure-tolerant kDFT array, while replication or l-node failure tolerance - lNFTs paradigm is applied at the SN level. For example, RAID1(M)/5(N) denotes a RAID1 at the higher level with a degree of replication M and each virtual disk is an SN configured as a RAID5 with N physical disks. We provide the data layout for RAID5/5 and RAID6/5 MRAIDs and give examples of updating data and recovering lost data. The former requires storage transactions to ensure the atomicity of storage updates. We discuss some weaknesses in reliability modeling in RAID5 and give examples of an asymptotic expansion method to compare the reliability of several MRAID organizations. We outline the reliability analysis of Markov chain models of VLDAs and briefly report on conclusions from simulation results. In Conclusions we outline areas for further research. | Using system-level models to evaluate I/O subsystem designs We describe a system-level simulation model and show that it enables accurate predictions of both I/O subsystem and overall system performance. In contrast, the conventional approach for evaluating the performance of an I/O subsystem design, which is based on standalone subsystem models, is often unable to accurately predict performance changes because it is too narrow in scope. In particular, conventional methodology treats all I/O requests equally, ignoring differences in how individual requests' response times affect system behavior (including both system performance and the subsequent I/O workload). We introduce the concept of request criticality to describe these feedback effects and show that real I/O workloads are not approximated well by either open or closed input models. Because conventional methodology ignores this fact, it often leads to inaccurate performance predictions and can thereby lead to incorrect conclusions and poor design choices. We illustrate these problems with real examples and show that a system-level model, which includes both the I/O subsystem and other important system components (e.g., CPUs and system software), properly captures the feedback and subsequent performance effects. | Parity logging disk arrays Parity-encoded redundant disk arrays provide highly reliable, cost-effective secondary storage with high performance for reads and large writes. Their performance on small writes, however, is much worse than mirrored disks—the traditional, highly reliable, but expensive organization for secondary storage. 
Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small-write problem for redundant disk arrays. Parity logging applies journalling techniques to reduce substantially the cost of small writes. We provide detailed models of parity logging and competing schemes—mirroring, floating storage, and RAID level 5—and verify these models by simulation. Parity logging provides performance competitive with mirroring, but with capacity overhead close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching more effectively than all three alternative approaches. | WorkOut: I/O workload outsourcing for boosting RAID reconstruction performance User I/O intensity can significantly impact the performance of on-line RAID reconstruction due to contention for the shared disk bandwidth. Based on this observation, this paper proposes a novel scheme, called WorkOut (I/O Workload Outsourcing), to significantly boost RAID reconstruction performance. WorkOut effectively outsources all write requests and popular read requests originally targeted at the degraded RAID set to a surrogate RAID set during reconstruction. Our lightweight prototype implementation of WorkOut and extensive trace-driven and benchmark-driven experiments demonstrate that, compared with existing reconstruction approaches, WorkOut significantly speeds up both the total reconstruction time and the average user response time. Importantly, WorkOut is orthogonal to and can be easily incorporated into any existing reconstruction algorithms. Furthermore, it can be extended to improving the performance of other background support RAID tasks, such as re-synchronization and disk scrubbing. | Logging RAID - An Approach to Fast, Reliable, and Low-Cost Disk Arrays Parity-based disk arrays provide high reliability and high performance for read and large write accesses at low storage cost. However, small writes are notoriously slow due to the well-known read-modify-write problem. This paper presents logging RAID, a disk array architecture that adopts data logging techniques to overcome the small-write problem in parity-based disk arrays. Logging RAID achieves high performance for a wide variety of I/O access patterns with very small disk space overhead. We show this through trace-driven simulations. | Constant time permutation: an efficient block allocation strategy for variable-bit-rate continuous media data To provide high accessibility of continuous-media (CM) data, CM servers generally stripe data across multiple disks. Currently, the most widely used striping scheme for CM data is round-robin permutation (RRP). Unfortunately, when RRP is applied to variable-bit-rate (VBR) CM data, load imbalance across multiple disks occurs, thereby reducing overall system performance. In this paper, the performance of a VBR CM server with RRP is analyzed. In addition, we propose an efficient striping scheme called constant time permutation (CTP), which takes the VBR characteristic into account and obtains a more balanced load than RRP. Analytic models of both RRP and CTP are presented, and the models are verified via trace-driven simulations. Analysis and simulation results show that CTP can substantially increase the number of clients supported, though it might introduce a few seconds/minutes of initial delay. 
| A trace-driven comparison of algorithms for parallel prefetching and caching No abstract available. | On reasonable and forced goal orderings and their use in an agenda-driven planning algorithm The paper addresses the problem of computing goal orderings, which is one of the longstanding issues in AI planning. It makes two new contributions. First, it formally defines and discusses two different goal orderings, which are called the reasonable and the forced ordering. Both orderings are defined for simple STRIPS operators as well as for more complex ADL operators supporting negation and conditional effects. The complexity of these orderings is investigated and their practical relevance is discussed. Secondly, two different methods to compute reasonable goal orderings are developed. One of them is based on planning graphs, while the other investigates the set of actions directly. Finally, it is shown how the ordering relations, which have been derived for a given set of goals G, can be used to compute a so-called goal agenda that divides G into an ordered set of subgoals. Any planner can then, in principle, use the goal agenda to plan for increasing sets of subgoals. This can lead to an exponential complexity reduction, as the solution to a complex planning problem is found by solving easier subproblems. Since only a polynomial overhead is caused by the goal agenda computation, a potential exists to dramatically speed up planning algorithms as we demonstrate in the empirical evaluation, where we use this method in the IPP planner. | Relational algebra operations Without Abstract | On the relations between stable and well-founded semantics of logic programs We study the relations between stable and well-founded semantics of logic programs. 1. We show that stable semantics can be defined in the same way as well-founded semantics based on the basic notion of unfounded sets. Hence, stable semantics can be considered as “two-valued well-founded semantics”. 2. An axiomatic characterization of stable and well-founded semantics of logic programs is given by a new completion theory, called strong completion . Similar to the Clark's completion, the strong completion can be interpreted in either two-valued or three-valued logic. We show that ◦ Two-valued strong completion specifies the stable semantics. ◦ Three-valued strong completion specifies the well-founded semantics. 3. We study the equivalence between stable semantics and well-founded semantics. At first, we prove the equivalence between the two semantics for strict programs. Then we introduce the bottom-stratified and top-strict condition generalizing both the stratifiability and the strictness, and show that the new condition is sufficient for the equivalence between stable and well-founded semantics. Further, we show that the call-consistency condition is sufficient for the existence of at least one stable model. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. 
To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.076524 | 0.072264 | 0.036302 | 0.036132 | 0.016136 | 0.008253 | 0.003611 | 0.000428 | 0.000051 | 0.00001 | 0 | 0 | 0 | 0 |
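The disk-failure study quoted in the row above turns a datasheet MTTF of 1,000,000 to 1,500,000 hours into a nominal annual failure rate of at most 0.88%. Under the constant-failure-rate assumption the study goes on to question, that conversion is simply hours-per-year divided by MTTF; a quick check of the arithmetic in Python:

def nominal_annual_failure_rate(mttf_hours):
    # Nominal annualized failure rate implied by a datasheet MTTF,
    # assuming a constant (exponential) failure rate over the year.
    return (24 * 365) / mttf_hours

print(nominal_annual_failure_rate(1_000_000))   # 0.00876 -> about 0.88% per year
print(nominal_annual_failure_rate(1_500_000))   # 0.00584 -> about 0.58% per year

The field replacement rates of 2 to 4% reported in the same abstract are therefore several times this nominal figure, which is the study's central point.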
Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. | Learning Control Knowledge for Forward Search Planning A number of today's state-of-the-art planners are based on forward state-space search. The impressive performance can be attributed to progress in computing domain independent heuristics that perform well across many domains. However, it is easy to find domains where such heuristics provide poor guidance, leading to planning failure. Motivated by such failures, the focus of this paper is to investigate mechanisms for learning domain-specific knowledge to better control forward search in a given domain. While there has been a large body of work on inductive learning of control knowledge for AI planning, there is a void of work aimed at forward-state-space search. One reason for this may be that it is challenging to specify a knowledge representation for compactly representing important concepts across a wide range of domains. One of the main contributions of this work is to introduce a novel feature space for representing such control knowledge. The key idea is to define features in terms of information computed via relaxed plan extraction, which has been a major source of success for non-learning planners. This gives a new way of leveraging relaxed planning techniques in the context of learning. Using this feature space, we describe three forms of control knowledge---reactive policies (decision list rules and measures of progress) and linear heuristics---and show how to learn them and incorporate them into forward state-space search. Our empirical results show that our approaches are able to surpass state-of-the-art non-learning planners across a wide range of planning competition domains. | Layer-wise analysis of deep networks with Gaussian kernels. | A survey of machine learning for big data processing. There is no doubt that big data are now rapidly expanding in all science and engineering domains. While the potential of these massive data is undoubtedly significant, fully making sense of them requires new ways of thinking and novel learning techniques to address the various challenges. In this paper, we present a literature survey of the latest advances in researches on machine learning for big data processing. First, we review the machine learning techniques and highlight some promising learning methods in recent studies, such as representation learning, deep learning, distributed and parallel learning, transfer learning, active learning, and kernel-based learning. Next, we focus on the analysis and discussions about the challenges and possible solutions of machine learning for big data. 
Following that, we investigate the close connections of machine learning with signal processing techniques for big data processing. Finally, we outline several open issues and research trends. | Nonlocal estimation of manifold structure. We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where local nonparametric methods fail. | Self Supervised Boosting Boosting algorithms and successful applications thereof abound for clas- sification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a ran- dom field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" gen- erated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a fea- ture is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data. | Deep Learning Advances in Computer Vision with 3D Data: A Survey. Deep learning has recently gained popularity achieving state-of-the-art performance in tasks involving text, sound, or image processing. Due to its outstanding performance, there have been efforts to apply it in more challenging scenarios, for example, 3D data processing. This article surveys methods applying deep learning on 3D data and provides a classification based on how they exploit them. From the results of the examined works, we conclude that systems employing 2D views of 3D data typically surpass voxel-based (3D) deep models, which however, can perform better with more layers and severe data augmentation. Therefore, larger-scale datasets and increased resolutions are required. | An Introduction to MCMC for Machine Learning This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons. | Some extensions of score matching Many probabilistic models are only defined up to a normalization constant. This makes maximum likelihood estimation of the model parameters very difficult. Typically, one then has to resort to Markov Chain Monte Carlo methods, or approximations of the normalization constant. 
Previously, a method called score matching was proposed for computationally efficient yet (locally) consistent estimation of such models. The basic form of score matching is valid, however, only for models which define a differentiable probability density function over R^n. Therefore, some extensions of the framework are proposed. First, a related method for binary variables is proposed. Second, it is shown how to estimate non-normalized models defined in the non-negative real domain, i.e. R_+^n. As a further result, it is shown that the score matching estimator can be obtained in closed form for some exponential families. | Big Data Deep Learning: Challenges and Perspectives Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends. | A 3D model recognition mechanism based on deep Boltzmann machines. The effectiveness of 3D model recognition generally depends on the feature representations and classification methods. Previous algorithms have not shown good capacities to detect 3D model features; thus, they seem not to be competent to recognize 3D models. Meanwhile, recent efforts have illustrated that Deep Boltzmann Machines (DBM) have great power to approximate the distributions of input data, and can achieve state-of-the-art results. In this paper, we propose a novel 3D model recognition mechanism based on DBM, which can be divided into two parts: one is feature detecting based on DBM, and the other is classification based on a semi-supervised learning method. During the first part, the high-level abstraction representation can be obtained from a well-trained DBM, and the feature is used in the semi-supervised classification method in the second part. The experiments are conducted on publicly available 3D model data sets: Princeton Shape Benchmark (PSB), SHREC'09 and National Taiwan University (NTU). The proposed method is compared with several state-of-the-art methods in terms of several popular evaluation criteria, and the experimental results show better performance of the proposed model. | The LRU-K page replacement algorithm for database disk buffering This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page by page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages.
In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access. | Investigation of Different Seeding Strategies in a Genetic Planner Planning is a difficult and fundamental problem of AI. An alternative solution to traditional planning techniques is to apply Genetic Programming. As a program is similar to a plan a Genetic Planner can be constructed that evolves plans to the plan solution. One of the stages of the Genetic Programming algorithm is the initial population seeding stage. We present five alternatives to simple random selection based on simple search. We found that some of these strategies did improve the initial population, and the efficiency of the Genetic Planner over simple random selection of actions. | Editorial introduction to the Neural Networks special issue on Deep Learning of Representations. | 1.004155 | 0.007273 | 0.004408 | 0.004364 | 0.003719 | 0.003648 | 0.002182 | 0.001067 | 0.000373 | 0.000025 | 0 | 0 | 0 | 0 |
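The LRU-K entry in the row above states the core idea, remembering the times of the last K references to each page, but not the complete algorithm, which also involves a correlated-reference period and retained history for evicted pages. The sketch below therefore captures only the basic eviction rule, namely evicting the resident page whose K-th most recent reference lies furthest in the past; the class name, the treatment of pages with fewer than K references, and the omission of those refinements are assumptions of this simplified illustration.

from collections import defaultdict, deque

class LRUKCache:
    # Simplified LRU-K sketch (K = 2 by default): evict the resident page whose
    # K-th most recent reference time is oldest.
    def __init__(self, capacity, k=2):
        self.capacity, self.k = capacity, k
        self.history = defaultdict(deque)   # page -> times of its last K references
        self.resident = set()
        self.clock = 0

    def access(self, page):
        self.clock += 1
        refs = self.history[page]
        refs.append(self.clock)
        if len(refs) > self.k:
            refs.popleft()
        if page in self.resident:
            return
        if len(self.resident) >= self.capacity:
            # Pages with fewer than K recorded references are treated as having
            # an infinitely old K-th reference and are evicted first.
            victim = min(self.resident,
                         key=lambda p: self.history[p][0]
                         if len(self.history[p]) == self.k else float("-inf"))
            self.resident.remove(victim)
        self.resident.add(page)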
Zoned-RAID for multimedia database servers This paper proposes a novel fault-tolerant disk subsystem named Zoned-RAID (Z-RAID). Z-RAID improves the performance of traditional RAID system by utilizing the zoning property of modern disks which provides multiple zones with different data transfer rates in a disk. This study proposes to optimize data transfer rate of RAID system by constraining placement of data blocks in multi-zone disks. We apply Z-RAID for multimedia database servers such as video servers that require a high data transfer rate as well as fault tolerance. Our analytical and experimental results demonstrate the superiority of Z-RAID to conventional RAID. Z-RAID provides a higher effective data transfer rate in normal mode with no disadvantage. In the presence of a disk failure, Z-RAID still performs as well as RAID. | Modeling and Performance Comparison of Reliability Strategies for Distributed Video Servers Large scale video servers are typically based on disk arrays that comprise multiple nodes and many hard disks. Due to the large number of components, disk arrays are susceptible to disk and node failures that can affect the server reliability. Therefore, fault tolerance must be already addressed in the design of the video server. For fault tolerance, we consider parity-based as well as mirroring-based techniques with various distribution granularities of the redundant data. We identify several reliability schemes and compare them in terms of the server reliability and per stream cost. To compute the server reliability, we use continuous time Markov chains that are evaluated using the SHARPE software package. Our study covers independent disk failures and dependent component failures. We propose a new mirroring scheme called Grouped One-to-One scheme that achieves the highest reliability among all schemes considered. The results of this paper indicate that dividing the server into independent groups achieves the best compromise between the server reliability and the cost per stream. We further find that the smaller the group size, the better the trade-off between a high server reliability and a low per stream cost. | I/O issues in a multimedia system In future computer system design, I/O systems will have to support continuous media such as video and audio, whose system demands are different from those of data such as text. Multimedia computing requires us to focus on designing I/O systems that can handle real-time demands. Video- and audio-stream playback and teleconferencing are real-time applications with different I/O demands. We primarily consider playback applications which require guaranteed real-time I/O throughput. In a multimedia server, different service phases of a real-time request are disk, small computer systems interface (SCSI) bus, and processor scheduling. Additional service might be needed if the request must be satisfied across a local area network. We restrict ourselves to the support provided at the server, with special emphasis on two service phases: disk scheduling and SCSI bus contention. When requests have to be satisfied within deadlines, traditional real-time systems use scheduling algorithms such as earliest deadline first (EDF) and least slack time first. However, EDF makes the assumption that disks are preemptable, and the seek-time overheads of its strict real-time scheduling result in poor disk utilization. We can provide the constant data rate necessary for real-time requests in various ways that require trade-offs. 
We analyze how trade-offs that involve buffer space affect the performance of scheduling policies. We also show that deferred deadlines, which increase buffer requirements, improve system performance significantly. | An Adaptive High-Low Water Mark Destage Algorithm for Cached RAID5 The High-Low Water Mark destage (HLWM) algorithm is widely used to enable a cached RAID5 to flush dirty data from its write cache to disks. It activates and deactivates a destaging process based on two time-invariant thresholds which are determined by cache occupancy levels. However, the opportunity exists to improve I/O throughput by adaptively changing the thresholds. This paper proposes an adaptive HLWM algorithm which dynamically changes its thresholds according to a varying I/O workload. Two thresholds are defined as the multiplication of changing rates of the cache occupancy level and the time required to fill and empty the cache. Performance evaluations with a cached RAID5 simulator reveal that the proposed algorithm outperforms the HLWM algorithm in terms of read response time, write cache hit ratio, and disk utilization. | File system aging—increasing the relevance of file system benchmarks Benchmarks are important because they provide a means for users and researchers to characterize how their workloads will perform on different systems and different system architectures. The field of file system design is no different from other areas of research in this regard, and a variety of file system benchmarks are in use, representing a wide range of the different user workloads that may be run on a file system. A realistic benchmark, however, is only one of the tools that is required in order to understand how a file system design will perform in the real world. The benchmark must also be executed on a realistic file system. While the simplest approach may be to measure the performance of an empty file system, this represents a state that is seldom encountered by real users. In order to study file systems in more representative conditions, we present a methodology for aging a test file system by replaying a workload similar to that experienced by a real file system over a period of many months, or even years. Our aging tools allow the same aging workload to be applied to multiple versions of the same file system, allowing scientific evaluation of the relative merits of competing file system designs. In addition to describing our aging tools, we demonstrate their use by applying them to evaluate two enhancements to the file layout policies of the UNIX fast file system. | On-line file caching Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache, pay the retrieval cost, and choose files to remove from the cache so that the total size of files in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm.
On any request sequence, the total cost incurred by the algorithm is at most k/(k-h+1) times the minimum possible using a cache of size h ≤ k. For any algorithm satisfying the latter bound, we show it is also the case that for most choices of k, the retrieval cost is either insignificant or the competitive ratio is constant. This helps explain why competitive ratios of many on-line paging algorithms have been typically observed to be constant in practice. | The automatic improvement of locality in storage systems Disk I/O is increasingly the performance bottleneck in computer systems despite rapidly increasing disk data transfer rates. In this article, we propose Automatic Locality-Improving Storage (ALIS), an introspective storage system that automatically reorganizes selected disk blocks based on the dynamic reference stream to increase effective storage performance. ALIS is based on the observations that sequential data fetch is far more efficient than random access, that improving seek distances produces only marginal performance improvements, and that the increasingly powerful processors and large memories in storage systems have ample capacity to reorganize the data layout and redirect the accesses so as to take advantage of rapid sequential data transfer. Using trace-driven simulation with a large set of real workloads, we demonstrate that ALIS considerably outperforms prior techniques, improving the average read performance by up to 50% for server workloads and by about 15% for personal computer workloads. We also show that the performance improvement persists as disk technology evolves. Since disk performance in practice is increasing by only about 8% per year, the benefit of ALIS may correspond to as much as several years of technological progress. | Disk caching in large database and timeshared systems We present the results of a variety of trace-driven simulations of disk cache designs using traces from a variety of mainframe timesharing and database systems in production use. We compute miss ratios, run lengths, traffic ratios, cache residency times, degree of memory pollution and other statistics for a variety of designs, varying block size, prefetching algorithm and write algorithm. We find that for this workload, sequential prefetching produces a significant (about 20%) but still limited improvement in the miss ratio, even using a powerful technique for detecting sequentiality. Copy-back writing decreased write traffic relative to write-through by more than 50%; periodic flushing of the dirty blocks increased write traffic only slightly compared to pure write-back, and then only for large cache sizes. Write-allocate had little effect compared to no-write-allocate. Block sizes of over a track don't appear to be useful. Limiting cache occupancy by a single process or transaction appears to have little effect. This study is unique in the variety and quality of the data used in the studies. | LiveJournal's Backend and memcached: Past, Present, and Future | PI/OT: parallel I/O templates This paper presents a novel, top-down, high-level approach to parallelizing file I/O. Each parallel file descriptor is annotated with a high-level specification, or template, of the expected parallel behavior. The annotations are external to and independent of the source code. At run-time, all I/O using a parallel file descriptor adheres to the semantics of the selected template.
By separating the parallel I/O specifications from the code, a user can quickly change the I/O behavior without rewriting the code. Templates can be composed hierarchically to construct complex access patterns. Two sample parallel programs using these templates are compared against versions implemented in an existing parallel I/O system (PIOUS). The sample programs show that the use of parallel I/O templates is beneficial from both the performance and software engineering points of view. | Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev... | Representing Concurrent Actions and Solving Conflicts As an extension of the well-known Action Description language A introduced by M. Gelfond and V. Lifschitz (7), C. Baral and M. Gelfond recently defined the dialect AC which allows the description of concurrent actions (1). Also, a sound but incomplete encoding of AC by means of an extended logic program was presented there. In this paper, we work on interpretations of contradictory inferences from partial action descriptions. Employing an interpretation different from the one implicitly used in AC, we present a new dialect A + C, which allows to infer non-contradictory information from contradictory descriptions and to describe nondeterminism and uncertainty. Furthermore, we give the first sound and complete encoding of AC, using equational logic programming, and extend it to A+C as well. | Automatic parallel I/O performance optimization in Panda Parallel I/O systems typically consist of individual processors, communication networks, and a large number of disks. Managing and utilizing these resources to meet performance, portability and usability goals of applications has become a significant challenge. Several parallel I/O system performance studies indicate that many factors, such as communication strategies used, the file system policies chosen, and the data storage layouts used, can affect the performance of a parallel I/O system. Without careful tuning of the performance knobs of the parallel I/O system for a target I/O workload in a target execution environment, problems such as load-imbalance and a communication bottleneck can occur, resulting in significant performance degradation and poor performance robustness; yet hand-tuning of the parallel I/O performance knobs for an anticipated I/O workload with a mix of different I/O patterns can be extremely difficult due to the complex interaction among different system modules and various tradeoffs among different performance knobs. We call these performance knobs "the performance parameters" of the parallel I/O system in this thesis. This thesis presents an automatic parallel I/O performance optimization approach, a model-based approach under which the details of selecting appropriate I/O parameter settings for a given situation are handled internally by the optimization engine in the parallel I/O system without human intervention.
To validate our hypothesis, we have built an optimizer that combines a rule-based approach with search algorithms such as simulated annealing to select optimal I/O parameter settings for a target I/O request sequence in a target execution environment in Panda, a parallel I/O library for collective I/O of multidimensional arrays. Our performance results obtained from two IBM SPs with significantly different configurations show that the Panda optimizer is able to select high-quality I/O parameter settings and deliver high performance under a variety of system configurations with a small optimization overhead. Our model-based automatic performance optimization approach includes two phases: first, when the optimization is invoked, the optimization engine is provided with a high level description of the target application I/O requests and the target platform characteristics. These descriptions tell the optimization engine what needs to be optimized (not how to optimize it). The workload characteristics include information such as the number and types of I/O requests issued and the sizes of the requests. The platform characteristics include the information such as the disk and processor speeds and file system bandwidth, etc. In the second phase, the optimization engine digests the description and selects a set of optimal I/O parameter settings using a performance model for the parallel I/O system and a set of optimization algorithms. The performance model is used to predict the system performance for dif | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.222026 | 0.044481 | 0.015867 | 0.008977 | 0.005514 | 0.002535 | 0.000847 | 0.000279 | 0.000113 | 0.000027 | 0 | 0 | 0 | 0 |
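The cached-RAID5 entry in the row above characterizes high-low water mark destaging only at the level of starting the flush of dirty data above one cache-occupancy threshold and stopping it below another, with the adaptive variant deriving those thresholds from the rate of change of occupancy and the time needed to fill or empty the cache. The fragment below shows just the fixed-threshold hysteresis rule; the function name and the use of fractional occupancy values are assumptions made for this sketch.

def destage_active(occupancy, high_mark, low_mark, currently_active):
    # High-low water mark rule for a write cache: start destaging dirty data
    # once occupancy reaches the high mark, keep destaging until it falls to
    # the low mark, and otherwise leave the current state unchanged.
    if occupancy >= high_mark:
        return True
    if occupancy <= low_mark:
        return False
    return currently_active

# Example: with marks at 80% and 40% of the cache, a cache that is 85% full
# starts destaging, keeps destaging at 60%, and stops once it drops to 40%.
print(destage_active(0.85, 0.80, 0.40, False))   # True
print(destage_active(0.60, 0.80, 0.40, True))    # True
print(destage_active(0.35, 0.80, 0.40, True))    # False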
An improved parallel disk scheduling algorithm. We address the problems of prefetching and I/O scheduling for read-once reference strings in a parallel I/O system. Read-once reference strings, in which each block is accessed exactly once, arise naturally in applications like databases and video retrieval. Using the standard parallel disk model with disks and a shared I/O buffer of size , we present a novel algorithm, Red-Black Prefetching (RBP), for parallel I/O scheduling. The number of parallel I/Os performed by RBP is within O( ) of the minimum possible. Algorithm RBP is easy to implement and requires computation time linear in the length of the reference string. Through simulation experiments we validated the benefits of RBP over simple greedy prefetching. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae.
A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. 
| Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. 
We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Learning internal representations Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for automatically learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate internal representation for a learning environment and then... | A Bayesian/Information Theoretic Model of Learning to Learn via Multiple Task Sampling A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks. | Rational Kernels: Theory and Algorithms Many classification algorithms were originally designed for fixed-size vectors. Recent applications in text and speech processing and computational biology require however the analysis of variable-length sequences and more generally weighted automata. An approach widely used in statistical learning techniques such as Support Vector Machines (SVMs) is that of kernel methods, due to their computational efficiency in high-dimensional feature spaces. We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels, that extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. Not all rational kernels are positive definite and symmetric (PDS), or equivalently verify the Mercer condition, a condition that guarantees the convergence of training for discriminant classification algorithms such as SVMs. We present several theoretical results related to PDS rational kernels. We show that under some general conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We give the proof of several characterization results that can be used to guide the design of PDS rational kernels.
We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before. Rational kernels can be combined with SVMs to form efficient and powerful techniques for a variety of classification tasks in text and speech processing, or computational biology. We describe examples of general families of PDS rational kernels that are useful in many of these applications and report the result of our experiments illustrating the use of rational kernels in several difficult large-vocabulary spoken-dialog classification tasks based on deployed spoken-dialog systems. Our results show that rational kernels are easy to design and implement and lead to substantial improvements of the classification accuracy. | Describing Visual Scenes Using Transformed Objects and Parts We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes. | Statistical models for partial membership We present a principled Bayesian framework for modeling partial memberships of data points to clusters. Unlike a standard mixture model which assumes that each data point belongs to one and only one mixture component, or cluster, a partial membership model allows data points to have fractional membership in multiple clusters. Algorithms which assign data points partial memberships to clusters can be useful for tasks such as clustering genes based on microarray data (Gasch & Eisen, 2002). Our Bayesian Partial Membership Model (BPM) uses exponential family distributions to model each cluster, and a product of these distributions, with weighted parameters, to model each datapoint. Here the weights correspond to the degree to which the datapoint belongs to each cluster. All parameters in the BPM are continuous, so we can use Hybrid Monte Carlo to perform inference and learning. We discuss relationships between the BPM and Latent Dirichlet Allocation, Mixed Membership models, Exponential Family PCA, and fuzzy clustering. Lastly, we show some experimental results and discuss nonparametric extensions to our model. | Gaussian Processes for Regression The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions.
In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging... | Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. Score matching (SM) and contrastive divergence (CD) are two recently proposed methods for estimation of nonnormalized statistical methods without computation of the normalization constant (partition function). Although they are based on very different approaches, we show in this letter that they are equivalent in a special case: in the limit of infinitesimal noise in a specific Monte Carlo method. Further, we show how these methods can be interpreted as approximations of pseudolikelihood. | Three new graphical models for statistical language modelling The supremacy of n-gram models in statistical language modelling has recently been challenged by parametric models that use distributed representations to counteract the difficulties caused by data sparsity. We propose three new probabilistic language models that define the distribution of the next word in a sequence given several preceding words by using distributed representations of those words. We show how real-valued distributed representations for words can be learned at the same time as learning a large set of stochastic binary hidden features that are used to predict the distributed representation of the next word from previous distributed representations. Adding connections from the previous states of the binary hidden features improves performance as does adding direct connections between the real-valued distributed representations. One of our models significantly outperforms the very best n-gram models. | Backpropagation Applied to Handwritten Zip Code Recognition. The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification. | Contractive Auto-Encoders: Explicit Invariance During Feature Extraction. | A Deep Learning Approach to DNA Sequence Classification. Deep learning neural networks are capable to extract significant features from raw data, and to use these features for classification tasks. In this work we present a deep learning neural network for DNA sequence classification based on spectral sequence representation. The framework is tested on a dataset of 16S genes and its performances, in terms of accuracy and F1 score, are compared to the General Regression Neural Network, already tested on a similar problem, as well as naive Bayes, random forest and support vector machine classifiers. The obtained results demonstrate that the deep learning approach outperformed all the other classifiers when considering classification of small sequence fragment 500 bp long.
| Logic Programming and Reasoning with Incomplete Information The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by new modal operators K and M (where for the set of rules T and formula F, KF stands for "F is known to be true by a reasoner with a set of premises T" and MF ... | Planning with sensing, concurrency, and exogenous events: logical framework and implementation The focus of current research in cognitive robotics is both on the realization of systems based on known formal settings and on the extension of previous formal approaches to account for features that play a significant role for autonomous robots, but have not yet received an adequate treatment. In this paper we adopt a formal framework derived from Propositional Dynamic Logics by exploiting their formal correspondence with Description Logics, and present an extension of such a framework obtained by introducing both concurrency on primitive actions and autoepistemic operators for explicitly representing the robot's epistemic state. We show that the resulting formal setting allows for the representation of actions with context-dependent effects, sensing actions, and concurrent actions, and address both the presence of exogenous events and the characterization of the notion of executable plan in such a complex setting. Moreover, we present an implementation of this framework in a system which is capable of generating plans that are actually executed on mobile robots, and illustrate the experimentation of such a system in the design and implementation of soccer players for the 1999 Robocup competition. | Editorial introduction to the Neural Networks special issue on Deep Learning of Representations. | 1.031469 | 0.02934 | 0.028588 | 0.028588 | 0.028588 | 0.014389 | 0.008219 | 0.00388 | 0.000718 | 0.000007 | 0.000001 | 0 | 0 | 0
Impacts of Indirect Blocks on Buffer Cache Energy Efficiency Indirect blocks, part of a file's metadata used for locating this file's data blocks, are typically treated indistinguishably from file's data blocks in buffer cache. This paper shows that this conventional approach will significantly detriment the overall energy efficiency of memory systems. Scattering small but frequently accessed indirect blocks over all memory chips reduces the energy saving opportunities. We propose a new energy-efficient buffer cache management scheme, named MEEP, which separates indirect and data blocks into different memory chips. Our trace-driven simulation results show that our new scheme can save memory energy up to 16.8% and 15.4% in the I/O-intensive server workloads TPC-R and TPC-H, respectively. | An Implementation of Page Allocation Shaping for Energy Efficiency Main memory in many tera-scale systems requires tens of kilowatts of power. The resulting energy consumption increases system cost and the heat produced reduces reliability. Emergent memory technologies will provide systems the ability to dynamically turn-on (online) and turn-off (offline) memory devices at runtime. This technology, coupled with slack in memory demand, offers the potential for significant energy savings in clusters of servers. However, to realize these energy savings, OS-level memory allocation and management techniques must be modified to minimize the number of active memory devices while satisfying application demands. We propose several page shaping techniques and structural enhancements to proactively and reactively direct allocations to a minimal number of devices. To evaluate these techniques on real systems, we implemented these shaping techniques in the Linux kernel. Experiments using our OS extensions coupled with a simple history-based heuristic (to track demand and control state transitions) yield up to 60% energy savings with less than 1% performance loss for various benchmarks including lmbench and SPEC. | Joint power management of memory and disk The paper presents a scheme to combine memory and power management for achieving better energy reduction. Our method periodically adjusts the size of physical memory and the timeout value to shut down a hard disk for reducing the average power consumption. We use Pareto distributions to model the distributions of idle time. The parameters of the distributions are adjusted at run-time for calculating the corresponding timeout value of the disk power management. The memory size is changed based on the inclusion property to predict the number of disk accesses at different memory sizes. Experimental results show more than 50% energy savings compared to a 2-competitive fixed-timeout method. | Program-counter-based pattern classification in buffer caching Program-counter-based (PC-based) prediction techniques have been shown to be highly effective and are widely used in computer architecture design. In this paper, we explore the opportunity and viability of applying PC-based prediction to operating systems design, in particular, to optimize buffer caching. We propose a Program-Counter-based Classification (PCC) technique for use in pattern-based buffer caching that allows the operating system to correlate the I/O operations with the program context in which they are issued via the program counters of the call instructions that trigger the I/O requests.
This correlation allows the operating system to classify I/O access pattern on a per-PC basis which achieves significantly better accuracy than previous per-file or per-application classification techniques. PCC also performs classification more quickly as per-PC pattern just needs to be learned once. We evaluate PCC via trace-driven simulations and an implementation in Linux, and compare it to UBM, a state-of-the-art pattern-based buffer replacement scheme. The performance improvements are substantial: the hit ratio improves by as much as 29.3% (with an average of 13.8%), and the execution time is reduced by as much as 29.0% (with an average of 13.7%). | Extended stable semantics for normal and disjunctive programs | A neural probabilistic language model A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts. | On the scale and performance of cooperative Web proxy caching While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic. | A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O.
While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. | The Boolean hierarchy: hardware over NP In this paper, we study the complexity of sets formed by boolean operations $(\bigcup, \bigcap,$ and complementation) on NP sets. These are the sets accepted by trees of hardware with NP predicates as leaves, and together form the boolean hierarchy. We present many results about the boolean hierarchy: separation and immunity results, complete languages, upward separations, connections to sparse oracles for NP, and structural asymmetries between complementary classes. Some results present new ideas and techniques. Others put previous results about NP and $D^{P}$ in a richer perspective. Throughout, we emphasize the structure of the boolean hierarchy and its relations with more common classes. | A Stable Distributed Scheduling Algorithm | Encoding Planning Problems in Nonmonotonic Logic Programs. We present a framework for encoding planning problems in logic programs with negation as failure, having computational efficiency as our major consideration. In order to accomplish our goal, we bring together ideas from logic programming and the planning systems graphplan and satplan. We discuss different representations of planning problems in logic programs, point out issues related to their performance, and show ways to exploit the structure of the domains in these representations.... | Optimizing the Embedded Caching and Prefetching Software on a Network-Attached Storage System As the speed gap between memory and disk is so large today, caching and prefetch are critical to enterprise class storage applications, which demands high performance. In this paper, we present our study on performance of a mid-range storage server produced by the Quanta Computer Incorporation. We first analyzed the existing caching mechanism in the server and then developed a fast caching methodology to reduce the cache access latency and processing overhead of the storage controller. In addition, we proposed a new adaptive prefetch scheme that reduces the average disk access time seen by the host. Via trace-driven simulation, we evaluated the performance of our new caching and adaptive prefetch schemes. Our results showed the performance improvement for the TPC-C on-line transaction benchmark. | Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture.
We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.2 | 0.05 | 0.006452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks. We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchi... | Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene. Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for succe... | Automatic muscle perimysium annotation using deep convolutional neural network Diseased skeletal muscle expresses mononuclear cell infiltration in the regions of perimysium. Accurate annotation or segmentation of perimysium can help biologists and clinicians to determine individualized patient treatment and allow for reasonable prognostication. However, manual perimysium annotation is time consuming and prone to inter-observer variations. Meanwhile, the presence of ambiguous patterns in muscle images significantly challenge many traditional automatic annotation algorithms. In this paper, we propose an automatic perimysium annotation algorithm based on deep convolutional neural network (CNN). We formulate the automatic annotation of perimysium in muscle images as a pixel-wise classification problem, and the CNN is trained to label each image pixel with raw RGB values of the patch centered at the pixel. The algorithm is applied to 82 diseased skeletal muscle images. We have achieved an average precision of 94% on the test dataset. | Sparseness Analysis in the Pretraining of Deep Neural Networks A major progress in deep multilayer neural networks (DNNs) is the invention of various unsupervised pretraining methods to initialize network parameters which lead to good prediction accuracy. This paper presents the sparseness analysis on the hidden unit in the pretraining process. In particular, we use the L₁-norm to measure sparseness and provide some sufficient conditions for that pretraining leads to sparseness with respect to the popular pretraining models--such as denoising autoencoders (DAEs) and restricted Boltzmann machines (RBMs). Our experimental results demonstrate that when the sufficient conditions are satisfied, the pretraining models lead to sparseness. Our experiments also reveal that when using the sigmoid activation functions, pretraining plays an important sparseness role in DNNs with sigmoid (Dsigm), and when using the rectifier linear unit (ReLU) activation functions, pretraining becomes less effective for DNNs with ReLU (Drelu). Luckily, Drelu can reach a higher recognition accuracy than DNNs with pretraining (DAEs and RBMs), as it can capture the main benefit (such as sparseness-encouraging) of pretraining in Dsigm. However, ReLU is not adapted to the different firing rates in biological neurons, because the firing rate actually changes along with the varying membrane resistances. To address this problem, we further propose a family of rectifier piecewise linear units (RePLUs) to fit the different firing rates. 
The experimental results show that the performance of RePLU is better than ReLU, and is comparable with those with some pretraining techniques, such as RBMs and DAEs. | Learning deep hierarchical visual feature coding. In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferrable between different image data sets. Further analysis reveal the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer. | Extended stable semantics for normal and disjunctive programs | The design and implementation of VAMPIRE In this article we describe VAMPIRE: a high-performance theorem prover for first-order logic. As our description is mostly targeted to the developers of such systems and specialists in automated reasoning, it focuses on the design of the system and some key implementation features. We also analyze the performance of the prover at CASC-JC. | The SPHINX-II Speech Recognition System: An Overview In order for speech recognizers to deal with increased task perplexity, speaker variation, and environment variation, improved speech recognition is critical. Steady progress has been made along these three dimensions at Carnegie Mellon. In this paper, we review the SPHINX-II speech recognition system and summarize our recent efforts on improved speech recognition. | Dependent Fluents We discuss the persistence of the indirect effects of an action—the question when such effects are subject to the commonsense law of inertia, and how to describe their evolution in the cases when inertia does not apply. Our model of nonpersistent effects involves the assumption that the value of the fluent in question is determined by the values of other fluents, although the dependency may be partially or completely unknown. This view leads us to a new high-level action language ARD (for Actions, Ramifications and Dependencies) that is capable of describing both persistent and nonpersistent effects. Unlike the action languages introduced in the past, ARD is "non-Markovian," in the sense that the evolution of the fluents described in this language may depend on their history, and not only on their current values. | Monotonic reductions, representative equivalence, and compilation of intractable problems The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science.
In this article, we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice. | Circuit definitions of nondeterministic complexity classes We consider restictions on Boolean circuits and use them to obtain new uniform circuit characterizations of nondeterministic space and time classes. We also obtain characterizations of counting classes based on nondeterministic time bounded computations on the arithmetic circuit model. It is shown how the notion of semiunboundedness unifies the definitions of many natural complexity classes. | A comorbidity-based recommendation engine for disease prediction A recommendation engine for disease prediction that combines clustering and association analysis techniques is proposed. The system produces local prediction models, specialized on subgroups of similar patients by using the past patient medical history, to determine the set of possible illnesses an individual could develop. Each model is generated by using the set of frequent diseases that contemporarily appear in the same patient. The illnesses a patient could likely be affected in the future are obtained by considering the items induced by high confidence rules generated by the frequent diseases. Experimental results show that the proposed approach is a feasible way to diagnose diseases. | On the Complexity of Plan Adaptation by Derivational Analogy in a Universal Classical Planning Framework In this paper we present an algorithm called DerUCP, which can be regarded as a general model for plan adaptation using Derivational Analogy. Using DerUCP, we show that previous results on the complexity of plan adaptation do not apply to Derivational Analogy. We also show that Derivational Analogy can potentially produce exponential reductions in the size of the search space generated by a planning system. | Mobile Robot Control Using a Cloud of Particles. Common control systems for mobile robots include the use of deterministic control laws together with state estimation approaches and the consideration of the certainty equivalence principle. Recent approaches consider the use of partially observable Markov decision process strategies together with Bayesian estimators. In order to reduce the required processing power and yet allow for multimodal or non-Gaussian distributions, a scheme based on a particle filter and a corresponding cloud of input signals is proposed in this paper. Results are presented and compared to a scheme with extended Kalman filter and the assumption that the certainty equivalence holds. | 1.2 | 0.2 | 0.2 | 0.1 | 0.025 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Learning potential functions by demonstration for path planning Potential functions can be used to design efficient path planning schemes. However, it is often difficult to design appropriate potential functions to mimic desired behavior of the agent. Instead of using a pre-designed potential function for path planning, this paper presents an algorithm that learns the underlying potential function from a given sample trajectory generated by an "expert" (say, a human). This underlying potential function implicitly incorporates obstacle avoidance information that may be intuitive or experience-based. The potential function to be learned is parametrized and the parameter weights are obtained through minimization of a well-designed cost function via a gradient descent search algorithm. Once learned, this potential function can be used for path planning in case of alternative (and more complex) scenarios, such as those with multiple obstacles. The paper presents the theoretical foundation and numerical validation of the proposed algorithm. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. 
Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Analysis of MIA GMDH as a self-organizing deep neural network In this paper, most popular deep feed-forward deterministic supervised neural networks are considered and the multilayered iterative GMDH algorithm as a self-organizing deep neural network is analyzed. Brief comparison of main features of the GMDH neural network and some other deep feed-forward neural networks are also given. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly.
For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. 
Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. 
| Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
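The multilayered-iterative (MIA) GMDH idea referenced in this row's query can be sketched as follows: every pair of inputs feeds a small quadratic polynomial neuron fitted by least squares, and only the neurons that score best on a held-out external criterion survive into the next layer. The polynomial form, the number of surviving neurons, and the synthetic data are assumptions made for illustration; a full GMDH network would repeat this layer-building step until the external criterion stops improving.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def design(x1, x2):
    """Quadratic GMDH-style partial description: [1, x1, x2, x1*x2, x1^2, x2^2]."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def gmdh_layer(X_tr, y_tr, X_val, y_val, keep=4):
    """One MIA layer: fit a polynomial neuron for every input pair, keep the best by validation error."""
    candidates = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
        err = np.mean((design(X_val[:, i], X_val[:, j]) @ coef - y_val) ** 2)  # external criterion
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Hypothetical regression data; a full GMDH network would stack such layers, feeding the
# surviving neurons' outputs forward until the external criterion stops improving.
X = rng.normal(size=(120, 5))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=120)
layer1 = gmdh_layer(X[:80], y[:80], X[80:], y[80:])
print([(round(err, 4), i, j) for err, i, j, _ in layer1])
```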
Applying deep learning on packet flows for botnet detection. Botnets constitute a primary threat to Internet security. The ability to accurately distinguish botnet traffic from non-botnet traffic can help significantly in mitigating malicious botnets. We present a novel approach to botnet detection that applies deep learning on flows of TCP/UDP/IP-packets. In our experimental results with a large dataset, we obtained 99.7% accuracy for classifying P2P-botnet traffic. This is comparable to or better than conventional botnet detection approaches, while reducing efforts for feature engineering and feature selection to a minimum. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. 
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
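The "kernel eigenvalue problem" view of nonlinear component analysis summarized in this row can be written down directly: build a kernel matrix, center it in feature space, and take its leading eigenvectors as the nonlinear components. The RBF kernel, its width, and the random data below are illustrative assumptions rather than the original experimental setup.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Kernel PCA with an RBF kernel: solve the eigenvalue problem of the centered kernel matrix."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))   # RBF kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one                       # centering in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)                            # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                                               # projections of the training points

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # hypothetical input patterns
Z = kernel_pca(X, n_components=2)
print(Z.shape)                            # (100, 2) nonlinear components usable as features
```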
Integrated Software Fingerprinting via Neural-Network-Based Control Flow Obfuscation. Dynamic software fingerprinting has been an important tool in fighting against software theft and pirating by embedding unique fingerprints into software copies. However, the existing work uses the methods from dynamic software watermarking as direct solutions, in which the secret marks are inside rather independent code modules attached to the software. This results in an intrinsic weakness again... | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. 
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
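The square-root smoothing idea recounted in the SLAM abstract within this row reduces, after linearization, to a sparse least-squares problem. The sketch below shows the core step on a dense toy system: QR-factor the measurement Jacobian and back-substitute, rather than forming and inverting the information matrix as an EKF-style solver would. The Jacobian and residual here are random stand-ins, not an actual SLAM factor graph.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(30, 6))         # hypothetical stacked measurement Jacobian (poses and landmarks)
b = rng.normal(size=30)              # hypothetical whitened residual vector

Q, R = np.linalg.qr(A)               # R is a square-root factor of the information matrix A^T A
delta = np.linalg.solve(R, Q.T @ b)  # state update by back-substitution

# Sanity check against the normal-equations solution A^T A delta = A^T b.
delta_normal = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(delta, delta_normal)
```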
On the structure of bounded queries to arbitrary NP sets In [Kad87b], Kadin showed that if the Polynomial Hierarchy (PH) has infinitely many levels, then for all $k$, $P^{SAT[k]} \subsetneq P^{SAT[k+1]}$. In this paper, we extend Kadin's technique to show that a proper query hierarchy is not an exclusive property of SAT. In fact, for any $A \in NP \setminus low_{3}$, if PH is infinite, then $P^{A[k]} \subsetneq P^{A[k+1]}$. Moreover, for the case of parallel queries, we show that $P^{A||[k+1]}$ is not contained in $P^{SAT||[k]}$. We claim that having a proper query hierarchy is a consequence of the oracle access mechanism and not a result of the "hardness" of a set. To support this claim, we show that assuming PH is infinite, one can construct an intermediate set $B \in NP$ so that $P^{B[k+1]} \not\subseteq P^{SAT[k]}$. That is, the query hierarchy for $B$ grows as "tall" as the query hierarchy for SAT. In addition, $B$ is intermediate, so it is not "hard" in any sense (e.g., not NP-hard under many-one, Turing, or strong nondeterministic reductions). Using these same techniques, we explore some other questions about query hierarchies. For example, we show that if there exists any $A$ such that $P^{A[2]} = P^{SAT[1]}$ then PH collapses to $\Delta^{P}_{3}$. | Lower bounds for constant depth circuits in the presence of help bits The problem of how many extra bits of 'help' a constant depth circuit needs in order to compute m functions is considered. Each help bit can be an arbitrary Boolean function. An exponential lower bound on the size of the circuit computing m parity functions in the presence of m-1 help bits is proved. The proof is carried out using the algebraic machinery of A. Razborov (1987) and R. Smolensky (1987). A by-product of the proof is that the same bound holds for circuits with mod p gates for a fixed prime p>2. The lower bound implies a random oracle separation for PH and PSPACE, which is optimal in a technical sense | Polynomial terse sets Let A be a set and k ∈ N be such that we wish to know the answers to x_1 ∈ A?, x_2 ∈ A?, …, x_k ∈ A? for various k-tuples ⟨x_1, x_2, …, x_k⟩. If this problem requires k queries to A in order to be solved in polynomial time then A is called polynomial terse or pterse. We show the existence of both arbitrarily complex pterse and non-pterse sets; and that P ≠ NP iff every NP-complete set is pterse. We also show connections with p-immunity, p-selective, p-generic sets, and the Boolean hierarchy. In our framework unique satisfiability (and a variation of it called kSAT) is, in some sense, "close" to satisfiability. | Bounded query computations A survey is given of directions, results, and methods in the study of complexity-bounded computations with a restricted number of queries to an oracle. In particular, polynomial-time-bounded computations with an NP oracle are considered. The main topics are: the relationship between the number of adaptive and parallel queries, connections to the closure of NP under polynomial-time truth-table reducibility, the Boolean hierarchy, the power of one more query, sparse oracles versus few queries, and natural complete problems for the most important bounded query classes | Bounded queries to SAT and the Boolean hierarchy We study the complexity of decision problems that can be solved by a polynomial-time Turing machine that makes a bounded number of queries to an NP oracle.
Depending on whether we allow some queries to depend on the results of other queries, we obtain two (probably) different hierarchies. We present several results relating the bounded NP query hierarchies to each other and to the Boolean hierarchy. We also consider the similarly defined hierarchies of functions that can be computed by a polynomial-time Turing machine that makes a bounded number of queries to an NP oracle. We present relations among these two hierarchies and the Boolean hierarchy. In particular we show for all k that there are functions computable with 2k parallel queries to an NP set that are not computable in polynomial time with k serial queries to any oracle, unless P = NP. As a corollary, k + 1 parallel queries to an NP set allow us to compute more functions than are computable with only k parallel queries to an NP set, unless P = NP; the same is true of serial queries. Similar results hold for all tt-self-reducible sets. Using a "mind-change" technique, we show that 2^k - 1 parallel queries to an NP set allow us to accept in polynomial time exactly the same sets as can be accepted in polynomial time with k serial queries to an NP set. (In fact, the same is true for any class in place of NP that is closed under polynomial-time positive-bounded-truth-table reductions.) This contrasts with the expected result for function computations with an NP oracle (Beigel, 1988). In addition we show that the Boolean hierarchy and the bounded query hierarchies (of languages) either stand or collapse together. Finally we show that if the Boolean hierarchy collapses to any level but the zeroth (deterministic polynomial time), then for all k there are functions computable in polynomial time with k parallel queries to an NP set that are not computable in polynomial time with k - 1 serial queries to any set (NP-complete sets are p-superterse). | The Boolean hierarchy I: structural properties | Simultaneous Strong Separations of Probabilistic and Unambiguous Complexity Classes We study the relationship between probabilistic and unambiguous computation, and provide strong relativized evidence that they are incomparable. In particular, we display a relativized world in which the complexity classes embodying these paradigms of computation are mutually immune. We answer questions formulated in, and extend the line of research opened by, Geske and Grollman (15) and Balcazar and Russo (3). | Optimization Problems And The Polynomial Hierarchy It is demonstrated that such problems as the symmetric Travelling Salesman Problem, Chromatic Number Problem, Maximal Clique Problem and a Knapsack Packing Problem are in the Δ^P_2 level of PH and no lower if Σ^P_1 ≠ Π^P_1, or NP ≠ co-NP. This shows that these problems cannot be solved by polynomial reductions that use only positive information from an NP oracle, if NP ≠ co-NP. It is then shown how to extend these results to prove that interesting problems are properly in Δ^{P,X}_{k+1} for all X, k where Σ^{P,X}_k ≠ Π^{P,X}_k in PH^X. | Two forms of dependence in propositional logic: controllability and definability We investigate two forms of dependence between variables and/or formulas within a propositional knowledge base: controllability (a set of variables X controls a formula φ if there is a way to fix the truth value of the variables in X so that φ takes a prescribed truth value) and definability (X defines a variable y if every truth assignment of the variables in X enables us to find out the truth value of y).
Several characterization results are pointed out, complexity issues are analyzed, and some applications of both notions, including decision under incomplete knowledge and/or partial observability, and hypothesis discrimination, are sketched. | Complexity Results for Serial Decomposability Korf (1985) presents a method for learning macro-operators and shows that the method is applicable to serially decomposable problems. In this paper I analyze the computational complexity of serial decomposability. Assuming that operators take polynomial time, it is NP-complete. to determine if an operator (or set of operators) is not serially decomposable, whether or not an ordering of state variables is given. In addition to serial decomposability of operators, a serially decomposable problem requires that the set of solvable states is closed under the operators. It is PSPACE-complete to determine if a given "finite state-variable problem" is serially decomposable. In fact, every solvable instance of a PSPACE problem can be converted to a serially decomposable problem. Furthermore, given a bound on the size of the input, every problem in PSPACE can be transformed to a problem that is nearly serially-decomposable, i.e., the problem is serially decomposable except for closure of solvable states or a unique goal state. | Formulating diagnostic problem solving using an action language with narratives and sensing Given a system and unexpected observations about the system, a diagnosis is often viewed as a fault assignment to the various components of the system that is consistent with (or that explains) the observations. If the observations occur over time, and if we allow the occurrence of (deliberate) actions and (exogenous) events, then the traditional notion of a candidate diagnosis must be modified to consider the possible occurrence of actions and events that could account for the unexpected... | What regularized auto-encoders learn from the data-generating distribution. What do auto-encoders learn about the underlying data-generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data-generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input). It contradicts previous interpretations of reconstruction error as an energy function. Unlike previous results, the theorems provided here are completely generic and do not depend on the parameterization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood because it does not involve a partition function. Finally, we show how an approximate Metropolis-Hastings MCMC can be setup to recover samples from the estimated distribution, and this is confirmed in sampling experiments. 
| Read Optimized File System Designs: A Performance Evaluation This paper presents a performance comparison of several file system allocation policies. The file systems are designed to provide high bandwidth between disks and main memory by taking advantage of parallelism in an underlying disk array, catering to large units of transfer, and minimizing the bandwidth dedicated to the transfer of meta data. All of the file systems described use a multiblock allocation strategy which allows both large and small files to be allocated efficiently. Simulation results show that these multiblock policies result in systems that are able to utilize a large percentage of the underlying disk bandwidth; more than 90% in sequential cases. As general purpose systems are called upon to support more data intensive applications such as databases and supercomputing, these policies offer an opportunity to provide superior performance to a larger class of users. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.051322 | 0.034991 | 0.026436 | 0.016767 | 0.010583 | 0.002519 | 0.00028 | 0.000098 | 0.000004 | 0 | 0 | 0 | 0 | 0
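The parity-based protection discussed in the RAID abstracts of this row rests on a simple invariant, shown below in toy form: a parity block is the bytewise XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. This illustrates only the basic single-parity idea, not the two-dimensional layout or the extra mirrored parity elements proposed in the cited paper.

```python
import os
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equally sized blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_blocks = [os.urandom(16) for _ in range(4)]   # hypothetical stripe of four data blocks
parity = xor_blocks(data_blocks)                    # parity block protects the stripe

lost_index = 2                                      # pretend one disk fails
survivors = [blk for k, blk in enumerate(data_blocks) if k != lost_index] + [parity]
rebuilt = xor_blocks(survivors)                     # XOR of survivors recovers the lost block
assert rebuilt == data_blocks[lost_index]
```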
Extraction of Features for Lip-reading Using Autoencoders. We study the incorporation of facial depth data in the task of isolated word visual speech recognition. We propose novel features based on unsupervised training of a single layer autoencoder. The features are extracted from both video and depth channels obtained by Microsoft Kinect device. We perform all experiments on our database of 54 speakers, each uttering 50 words. We compare our autoencoder features to traditional methods such as DCT or PCA. The features are further processed by simplified variant of hierarchical linear discriminant analysis in order to capture the speech dynamics. The classification is performed using a multi-stream Hidden Markov Model for various combinations of audio, video, and depth channels. We also evaluate visual features in the join audio-video isolated word recognition in noisy environments. English | Image Denoising and Inpainting with Deep Neural Networks. We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method achieves state-of-the-art performance in the image denoising task. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning. | Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations. | Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. 
Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev... | Extended stable semantics for normal and disjunctive programs | A neural probabilistic language model A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts. | On the scale and performance of cooperative Web proxy caching Abstract While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative- caching performance,in the large-scale World Wide Web en- vironment. This paper uses both trace-based analysis and analytic modelling,to show,the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement po- tential of cooperation between 200 small-organization prox- ies within a university environment, and between two large- organization proxies handling 23,000 and 60,000 clients, re- spectively. With our model, we extend beyond these popula- tions to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance,benefits only within limited popu- lation bounds. We also use our model to examine the impli- cations of future trends in Web-access behavior and traffic. | A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. 
This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. | Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI 99, Stockholm, Sweden, July 31 - August 6, 1999. 2 Volumes, 1450 pages | A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution. | A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system. | iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings. 
| When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably well results, the anomaly detection essence underling most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to testify our approach and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper. © 2012 by the AIS/ICIS Administrative Office All rights reserved. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.2 | 0.010526 | 0.00084 | 0.000615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
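The distributed word representations described in the language-model and topic-representation abstracts of this row can be sketched with a tiny, untrained next-word scorer: each word gets an embedding, the context embeddings are concatenated, and a softmax over the vocabulary yields next-word probabilities. The vocabulary, dimensions, and weights below are hypothetical placeholders; a real model would learn them from a corpus.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
V, d, context = len(vocab), 8, 2

C = rng.normal(0, 0.1, (V, d))            # word embedding matrix (one row per word)
W = rng.normal(0, 0.1, (context * d, V))  # projection from concatenated context to vocabulary scores
b = np.zeros(V)

def next_word_probs(context_ids):
    """Softmax over the vocabulary given the concatenated embeddings of the context words."""
    x = np.concatenate([C[i] for i in context_ids])
    logits = x @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

p = next_word_probs([vocab.index("the"), vocab.index("cat")])
print(dict(zip(vocab, np.round(p, 3))))
```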
Efficient XML storage based on DTM for read-oriented workloads We propose an XML storage scheme based on Document Table Model (DTM) which expresses an XML document as a table form. When performing query processing on large scale XML data, XML storage schemes on secondary storage and their access methods greatly affect the entire performance. For this reason, we developed an XQuery processing scheme in which an XML document is internally represented as a set of DTM blocks and can be directly stored on secondary storage. Our scheme is tailored for read-oriented workloads, and an XML document is stored on disks as arrays of nodes. We analyzed the actual data access patterns to DTM that appeared in processing XML queries, and employed the combination of informed prefetching and scan-resistant buffer management based on the analysis. Our experimental results showed that our storage scheme outperforms competing schemes with respect to I/O-intensive workloads, and our sophisticated prefetching and caching increase overall throughput without significant drawbacks. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. 
Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Online diagnosis of hard faults in microprocessors We develop a microprocessor design that tolerates hard faults, including fabrication defects and in-field faults, by leveraging existing microprocessor redundancy. To do this, we must: detect and correct errors, diagnose hard faults at the field deconfigurable unit (FDU) granularity, and deconfigure FDUs with hard faults. In our reliable microprocessor design, we use DIVA dynamic verification to detect and correct errors. Our new scheme for diagnosing hard faults tracks instructions' core structure occupancy from decode until commit. If a DIVA checker detects an error in an instruction, it increments a small saturating error counter for every FDU used by that instruction, including that DIVA checker. A hard fault in an FDU quickly leads to an above-threshold error counter for that FDU and thus diagnoses the fault. For deconfiguration, we use previously developed schemes for functional units and buffers and present a scheme for deconfiguring DIVA checkers. Experimental results show that our reliable microprocessor quickly and accurately diagnoses each hard fault that is injected and continues to function, albeit with somewhat degraded performance. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. 
Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. 
The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. 
| Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Deep Nonlinear Metric Learning for 3-D Shape Retrieval. Effective 3-D shape retrieval is an important problem in 3-D shape analysis. Recently, feature learning-based shape retrieval methods have been widely studied, where the distance metrics between 3-D shape descriptors are usually hand-crafted. In this paper, motivated by the fact that deep neural network has the good ability to model nonlinearity, we propose to learn an effective nonlinear distance... | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. 
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Preprocessing Techniques for QBFs | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. 
They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. 
In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. 
The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Complexity of Shallow Networks Representing Functions with Large Variations. | Can Two Hidden Layers Make a Difference? Representations of multivariable Boolean functions by one and two-hidden-layer Heaviside perceptron networks are investigated. Sufficient conditions are given for representations with the numbers of network units depending on the input dimension d linearly and polynomially. Functions with such numbers depending on d exponentially or having some weights exponentially large are described in terms of properties of their communication matrices. A mathematical formalization of the concept of "highly-varying functions" is proposed. There is given an example of such function which can be represented by a network with two hidden layers with merely d units. | On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed by several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts on their ability in implementing high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the used activation function. The obtained results seem to support the idea that deep networks actually implements functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems. | Model complexities of shallow networks representing highly varying functions Model complexities of shallow (i.e., one-hidden-layer) networks representing highly varying multivariable { - 1 , 1 } -valued functions are studied in terms of variational norms tailored to dictionaries of network units. It is shown that bounds on these norms define classes of functions computable by networks with constrained numbers of hidden units and sizes of output weights. Estimates of probabilistic distributions of values of variational norms with respect to typical computational units, such as perceptrons and Gaussian kernel units, are derived via geometric characterization of variational norms combined with the probabilistic Chernoff Bound. It is shown that almost any randomly chosen { - 1 , 1 } -valued function on a sufficiently large d-dimensional domain has variation with respect to perceptrons depending on d exponentially. | Shallow vs. Deep Sum-Product Networks. We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. 
We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning. | Learning eigenfunctions links spectral embedding and kernel PCA. In this letter, we show a direct relation between spectral embedding methods and kernel principal components analysis and how both are special cases of a more general learning problem: learning the principal eigenfunctions of an operator defined from a kernel and the unknown data-generating density. Whereas spectral embedding methods provided only coordinates for the training points, the analysis justifies a simple extension to out-of-sample examples (the Nyström formula) for multidimensional scaling (MDS), spectral clustering, Laplacian eigenmaps, locally linear embedding (LLE), and Isomap. The analysis provides, for all such spectral embedding methods, the definition of a loss function, whose empirical average is minimized by the traditional algorithms. The asymptotic expected value of that loss defines a generalization performance and clarifies what these algorithms are trying to learn. Experiments with LLE, Isomap, spectral clustering, and MDS show that this out-of-sample embedding formula generalizes well, with a level of error comparable to the effect of small perturbations of the training set on the embedding. | Deep learning via semi-supervised embedding We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques. | Greedy Layer-Wise Training of Deep Networks Deep multi-layer neural networks have many levels of non-linearities, which allows them to potentially represent very compactly highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. | Indexing By Latent Semantic Analysis | A linear time algorithm for finding tree-decompositions of small treewidth In this paper, we give for constant k a linear-time algorithm that, given a graph G = (V, E), determines whether the treewidth of G is at most k and, if so, finds a tree-decomposition of G with treewidth at most k. A consequence is that every minor-closed class of graphs that does not contain all planar graphs has a linear-time recognition algorithm. 
Another consequence is that a similar result holds when we look instead for path-decompositions with pathwidth at most some constant k. | Using dynamic sets to overcome high I/O latencies during search Describes a single unifying abstraction called 'dynamic sets', which can offer substantial benefits to search applications. These benefits include greater opportunity in the I/O subsystem to aggressively exploit prefetching and parallelism, as well as support for associative naming to complement the hierarchical naming in typical file systems. This paper motivates dynamic sets and presents the design of a system that embodies this abstraction. | A new approach to I/O performance evaluation: self-scaling I/O benchmarks, predicted I/O performance Current I/O benchmarks suffer from several chronic problems: they quickly become obsolete, they do not stress the I/O system, and they do not help in understanding I/O system performance. We propose a new approach to I/O performance analysis. First, we propose a self-scaling benchmark that dynamically adjusts aspects of its workload according to the performance characteristics of the system being measured. By doing so, the benchmark automatically scales across current and future systems. The evaluation aids in understanding system performance by reporting how performance varies according to each of the workload parameters. Second, we propose predicted performance, a technique for using the results from the self-scaling evaluation to quickly estimate the performance for workloads that have not been measured. We show that this technique yields reasonably accurate performance estimates and argue that this method gives a far more accurate comparative performance evaluation than traditional single point benchmarks. We apply our new evaluation technique by measuring a SPARCstation 1+ with one SCSI disk, an HP 730 with one SCSI-II disk, a Sprite LFS DECstation 5000/200 with a three-disk disk array, a Convex C240 minisupercomputer with a four-disk disk array, and a Solbourne 5E/905 fileserver with a two-disk disk array. | Exploiting Web Log Mining for Web Cache Enhancement Improving the performance of the Web is a crucial requirement, since its popularity resulted in a large increase in the user perceived latency. In this paper, we describe a Web caching scheme that capitalizes on prefetching. Prefetching refers to the mechanism of deducing forthcoming page accesses of a client, based on access log information. Web log mining methods are exploited to provide effective prediction of Web-user accesses. The proposed scheme achieves a coordination between the two techniques (i.e., caching and prefetching). The prefetched documents are accommodated in a dedicated part of the cache, to avoid the drawback of incorrect replacement of requested documents. The requirements of the Web are taken into account, compared to the existing schemes for buffer management in database and operating systems. Experimental results indicate the superiority of the proposed method compared to the previous ones, in terms of improvement in cache performance. | Exploring Sequence Alignment Algorithms On Fpga-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment based on PCs cannot fulfill the increasing demand. Accelerating the algorithm using FPGAs provides better performance than other platforms.
This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes the current solutions and provides a reference for further accelerating sequence alignment on an FPGA-based heterogeneous architecture. | 1.2118 | 0.036383 | 0.01244 | 0.00675 | 0.000433 | 0.000121 | 0.000032 | 0.000013 | 0.000002 | 0 | 0 | 0 | 0 | 0
Multilayer Perceptron and Stacked Autoencoder for Internet Traffic Prediction. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. 
| Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. 
Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. 
| Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Random decision forests Decision trees are attractive classifiers due to their high execution speed. But trees derived with traditional methods often cannot be grown to arbitrary complexity for possible loss of generalization accuracy on unseen data. The limitation on complexity usually means suboptimal accuracy on training data. Following the principles of stochastic modeling, we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data. The essence of the method is to build multiple trees in randomly selected subspaces of the feature space. Trees in, different subspaces generalize their classification in complementary ways, and their combined classification can be monotonically improved. The validity of the method is demonstrated through experiments on the recognition of handwritten digits | Nonlocal estimation of manifold structure. We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where local nonparametric methods fail. | Global Data Analysis and the Fragmentation Problem in Decision Tree Induction We investigate an inherent limitation of top-down decision tree induction in which the continuous partitioning of the instance space progressively lessens the statistical support of every partial (i.e. disjunctive) hypothesis, known as the fragmentation problem. We show, both theoretically and empirically, how the fragmentation problem adversely affects predictive accuracy as variation (a measure of concept difficulty) increases. Applying feature-construction techniques at every tree node, which we implement on a decision tree inducer DALI, is proved to only partially solve the fragmentation problem. Our study illustrates how a more robust solution must also assess the value of each partial hypothesis by recurring to all available training data, an approach we name global data analysis, which decision tree induction alone is unable to accomplish. The value of global data analysis is evaluated by comparing modified versions of C4.5 rules with C4.5 trees and DALI, on both artificial and real-world domains. Empirical results suggest the importance of combining both feature construction and global data analysis to solve the fragmentation problem. | An Information Measure For Classification | Training connectionist models for the structured language model We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. 
The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser. | Online Convex Programming and Generalized Infinitesimal Gradient Ascent. Convex programming involves a convex set F ⊆ R^n and a convex cost function c : F → R. The goal of convex programming is to find a point in F which minimizes c. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent. | Is Learning The N-Th Thing Any Easier Than Learning The First? This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks. | Unsupervised Learning of Image Transformations We describe a probabilistic model for learning rich, distributed representations of image transformations. The basic model is defined as a gated conditional random field that is trained to predict transformations of its inputs using a factorial set of latent variables. Inference in the model consists in extracting the transformation, given a pair of images, and can be performed exactly and efficiently. We show that, when trained on natural videos, the model develops domain specific motion features, in the form of fields of locally transformed edge filters. When trained on affine, or more general, transformations of still images, the model develops codes for these transformations, and can subsequently perform recognition tasks that are invariant under these transformations. It can also fantasize new transformations on previously unseen images. We describe several variations of the basic model and provide experimental results that demonstrate its applicability to a variety of tasks.
| Exploring Strategies for Training Deep Neural Networks Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently proposed a greedy layer-wise unsupervised learning procedure relying on the training algorithm of restricted Boltzmann machines (RBM) to initialize the parameters of a deep belief network (DBN), a generative model with many layers of hidden causal variables. This was followed by the proposal of another greedy layer-wise procedure, relying on the usage of autoassociator networks. In the context of the above optimization problem, we study these algorithms empirically to better understand their success. Our experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy helps the optimization by initializing weights in a region near a good local minimum, but also implicitly acts as a sort of regularization that brings better generalization and encourages internal distributed representations that are high-level abstractions of the input. We also present a series of experiments aimed at evaluating the link between the performance of deep neural networks and practical aspects of their topology, for example, demonstrating cases where the addition of more depth helps. Finally, we empirically explore simple variants of these training algorithms, such as the use of different RBM input unit distributions, a simple way of combining gradient estimators to improve performance, as well as on-line versions of those algorithms. | Learning semantic representation with neural networks for community question answering retrieval •Learning the semantic representation using neural network architecture.•The neural network is trained via pre-training and fine-tuning phase.•The learned semantic level feature is incorporated into a LTR framework. | Weakly-Shared Deep Transfer Networks for Heterogeneous-Domain Knowledge Propagation In recent years, deep networks have been successfully applied to model image concepts and achieved competitive performance on many data sets. In spite of impressive performance, the conventional deep networks can be subjected to the decayed performance if we have insufficient training examples. This problem becomes extremely severe for deep networks with powerful representation structure, making them prone to over fitting by capturing nonessential or noisy information in a small data set. In this paper, to address this challenge, we will develop a novel deep network structure, capable of transferring labeling information across heterogeneous domains, especially from text domain to image domain. This weakly-shared Deep Transfer Networks (DTNs) can adequately mitigate the problem of insufficient image training data by bringing in rich labels from the text domain. Specifically, we present a novel architecture of DTNs to translate cross-domain information from text to image. To share the labels between two domains, we will build multiple weakly shared layers of features. It allows to represent both shared inter-domain features and domain-specific features, making this structure more flexible and powerful in capturing complex data of different domains jointly than the strongly shared layers. 
Experiments on real world dataset will show its competitive performance as compared with the other state-of-the-art methods. | Efficient top-down computation of queries under the well-founded semantics The well-founded model provides a natural and robust semantics for logic programs with negative literals in rule bodies. Although various procedural semantics have been proposed for query evaluation under the well-founded semantics, the practical issues of implementation for effective and efficient computation of queries have been rarely discussed. | Planning with sensing, concurrency, and exogenous events: logical framework and implementation The focus of current research in cognitive robotics is both on the realization of sys- tems based on known formal settings and on the extension of previous formal approaches to account for features that play a signifl- cant role for autonomous robots, but have not yet received an adequate treatment. In this paper we adopt a formal framework de- rived from Propositional Dynamic Logics by exploiting their formal correspondence with Description Logics, and present an extension of such a framework obtained by introducing both concurrency on primitive actions and autoepistemic operators for explicitly repre- senting the robot's epistemic state. We show that the resulting formal setting allows for the representation of actions with context- dependent efiects, sensing actions, and con- current actions, and address both the pres- ence of exogenous events and the characteri- zation of the notion of executable plan in such a complex setting. Moreover, we present an implementation of this framework in a system which is capable of generating plans that are actually executed on mobile robots, and illus- trate the experimentation of such a system in the design and implementation of soccer players for the 1999 Robocup competition. | Editorial introduction to the Neural Networks special issue on Deep Learning of Representations. | 1.041733 | 0.04033 | 0.04033 | 0.040014 | 0.040014 | 0.020132 | 0.010032 | 0.004481 | 0.000488 | 0.000004 | 0.000001 | 0 | 0 | 0 |
Proactive Data Migration for Improved Storage Availability in Large-Scale Data Centers In face of high partial and complete disk failure rates and untimely system crashes, the executions of low-priority background tasks become increasingly frequent in large-scale data centers. However, the existing algorithms are all reactive optimizations and only exploit the temporal locality of workloads to reduce the user I/O requests during the low-priority background tasks. To address the problem, this paper proposes IDO (Intelligent Data Outsourcing), a zone-based and proactive data migration optimization, to significantly improve the efficiency of the low-priority background tasks. The main idea of IDO is to proactively identify the hot data zones of RAID-structured storage systems in the normal operational state. By leveraging the prediction tools to identify the upcoming events, IDO proactively migrates the data blocks belonging to the hot data zones on the degraded device to a surrogate RAID set in the large-scale data centers. Upon a disk failure or crash reboot, most user I/O requests addressed to the degraded RAID set can be serviced directly by the surrogate RAID set rather than the much slower degraded RAID set. Consequently, the performance of the background tasks and user I/O performance during the background tasks are improved simultaneously. Our lightweight prototype implementation of IDO and extensive trace-driven experiments on two case studies demonstrate that, compared with the existing stateof- the-art approaches, IDO effectively improves the performance of the low-priority background tasks. Moreover, IDO is portable and can be easily incorporated into any existing algorithms for RAID-structured storage systems. | HPDA: A hybrid parity-based disk array for enhanced performance and reliability A single flash-based Solid State Drive (SSD) can not satisfy the capacity, performance and reliability requirements of a modern storage system supporting increasingly demanding data-intensive computing applications. Applying RAID schemes to SSDs to meet these requirements, while a logical and viable solution, faces many challenges. In this paper, we propose a Hybrid Parity-based Disk Array architecture, HPDA, which combines a group of SSDs and two hard disk drives (HDDs) to improve the performance and reliability of SSD-based storage systems. In HPDA, the SSDs (data disks) and part of one HDD (parity disk) compose a RAID4 disk array. Meanwhile, a second HDD and the free space of the parity disk are mirrored to form a RAID1-style write buffer that temporarily absorbs the small write requests and acts as a surrogate set during recovery when a disk fails. The write data is reclaimed back to the data disks during the lightly loaded or idle periods of the system. Reliability analysis shows that the reliability of HPDA, in terms of MTTDL (Mean Time To Data Loss), is better than that of either pure HDD-based or SSD-based disk array. Our prototype implementation of HPDA and performance evaluations show that HPDA significantly outperforms either HDD-based or SSD-based disk array. | The pitfalls of deploying solid-state drive RAIDs Solid-State Drives (SSDs) are about to radically change the way we look at storage systems. Without moving mechanical parts, they have the potential to supplement or even replace hard disks in performance-critical applications in the near future. 
Storage systems applied in such settings are usually built using RAIDs consisting of a bunch of individual drives for both performance and reliability reasons. Most existing work on SSDs, however, deals with the architecture at system level, the flash translation layer (FTL), and their influence on the overall performance of a single SSD. Therefore, it is currently largely unclear whether RAIDs of SSDs exhibit different performance and reliability characteristics than those comprising hard disks and to which issues we have to pay special attention to ensure optimal operation in terms of performance and reliability. In this paper, we present a detailed analysis of SSD RAID configuration issues and derive several pitfalls for deploying SSDs in common RAID level configurations that can lead to severe performance degradation. After presenting potential solutions for each of these pitfalls, we concentrate on the particular challenge that SSDs can suffer from bad random write performance. We identify that over-provisioning offers a potential solution to this problem and validate the effectiveness of over-provisioning in common RAID level configurations by experiments whose results are compared to those of an analytical model that allows us to approximately predict the random write performance of SSD RAIDs based on the characteristics of a single SSD. Our results show that over-provisioning is indeed an effective method that can increase random write performance in SSD RAIDs by more than an order of magnitude, eliminating the potential Achilles heel of SSD-based storage systems. | A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
| Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented. | Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. | Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism. | Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k = {\rm NP}[\log^k n] \subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse.
For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all k. | Normal forms for answer sets programming Normal forms for logic programs under stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The body of rules is composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle, or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e. the syntactic characterization of existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called Cycle Graph which presented in the companion article Costantini (2004b). | A cost-benefit scheme for high performance predictive prefetching | When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably well results, the anomaly detection essence underling most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to testify our approach and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper. © 2012 by the AIS/ICIS Administrative Office All rights reserved. 
| Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.2 | 0.1 | 0.025 | 0.000219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices. In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward and backward propagation, as well as weight updates, in hardware, is helpful during the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron network, we evaluate the learning performance according to various conductance responses of electronic synapse devices and weight-updating methods. It is shown that the learning accuracy is comparable to that obtained when using a software-based BP algorithm when the electronic synapse device has a linear conductance response with a high dynamic range. Furthermore, the proposed unidirectional weight-updating method is suitable for electronic synapse devices which have nonlinear and finite conductance responses. Because this weight-updating method can compensate for the drawback of asymmetric weight updates, we can obtain better accuracy compared to other methods. This adaptive learning rule, which can be applied to full hardware implementation, can also compensate for the degradation of learning accuracy due to the probable device-to-device variation in an actual electronic synapse device. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
| Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Adaptive Convolutional ELM For Concept Drift Handling in Online Stream Data. In the big data era, data are generated continuously and their distribution may keep changing over time. These challenges in online data streams are known as concept drift. In this paper, we propose the Adaptive Convolutional ELM method (ACNNELM), an enhancement of the Convolutional Neural Network (CNN) with a hybrid Extreme Learning Machine (ELM) model plus adaptive capability. This method is aimed at concept drift handling. We enhance the CNN as a convolutional hierarchical feature representation learner combined with Elastic ELM (E$^2$LM) as a parallel supervised classifier. We propose an Adaptive OS-ELM (AOS-ELM) for concept drift adaptability at the classifier level (named ACNNELM-1) and matrix-concatenation ensembles for concept drift adaptability at the ensemble level (named ACNNELM-2). Our proposed Adaptive CNNELM is flexible in that it works well at both the classifier level and the ensemble level, while most current methods are proposed to work at only one of the levels. We verified our method on the extended MNIST data set and the notMNIST data set. We set up the experiments to simulate virtual drift, real drift, and hybrid drift events, and we demonstrated how our CNNELM adaptability works. Our proposed method works well and gives better accuracy, computation scalability, and concept drift adaptability compared to the regular ELM and CNN. Further research is still required to study the optimum parameters and to use more varied image data sets. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
| Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
From Close To Distant And Back: How To Read With The Help Of Machines In recent years, a common trend characterised by the adoption of text mining methods for the study of digital sources has emerged in digital humanities, often in opposition to traditional hermeneutic approaches. In our paper, we intend to show how text mining methods will always need strong support from the humanist. On the one hand, we remark that humanities research involving computational techniques should be thought of as a three-step process: from close reading (identification of a specific case study, initial feature selection) to distant reading (text mining analysis) to close reading again (evaluation of the results, interpretation, use of the results). Moreover, we highlight how failing to understand the importance of all three steps is a major cause of the mistrust in text mining techniques developed around the humanities. On the other hand, we observe that text mining techniques could be a very promising tool for the humanities and that researchers should not renounce such approaches, but should instead experiment with advanced methods such as the ones belonging to the family of deep learning. In this sense we remark that, especially in the field of digital humanities, exploiting the complementarity between computational methods and humans will be the most advantageous research direction. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
| Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
DualFS: a new journaling file system without meta-data duplication In this paper we introduce DualFS, a new high performance journaling file system that puts data and meta-data on different devices (usually, two partitions on the same disk or on different disks), and manages them in very different ways. Unlike other journaling file systems, DualFS has only one copy of every meta-data block. This copy is in the meta-data device, a log which is used by DualFS both to read and to write meta-data blocks. By avoiding a time-expensive extra copy of meta-data blocks, DualFS can achieve a good performance as compared to other journaling file systems. Indeed, we have implemented a DualFS prototype, which has been evaluated with microbenchmarks and macrobenchmarks, and we have found that DualFS greatly reduces the total I/O time taken by the file system in most cases (up to 97%), whereas it slightly increases the total I/O time only in a few and limited cases. | A high performance multi-structured file system design File system I/O is increasingly becoming a performance bottleneck in large distributed computer systems. This is due to the increased file I/O demands of new applications, the inability of any single storage structure to respond to these demands, and the slow decline of, disk access times (latency and seek) relative to the rapid increase in CPU speeds, memory size, and network bandwidth.We present a multi-structured file system designed for high bandwidth I/O and fast response. Our design is based on combining disk caching with three different file storage structures, each implemented on an independent and isolated disk array. Each storage structure is designed to optimize a different set of file system access characteristics such as cache writes, directory searches, file attribute requests or large sequential reads/writes.As part of our study, we analyze the performance of an existing file system using trace data from UNIX disk I/O-intensive workloads. Using trace driven simulations, we show how performance is improved by using separate storage structures as implemented by a multi-structured file system. | Context-aware prefetching at the storage server In many of today's applications, access to storage constitutes the major cost of processing a user request. Data prefetching has been used to alleviate the storage access latency. Under current prefetching techniques, the storage system prefetches a batch of blocks upon detecting an access pattern. However, the high level of concurrency in today's applications typically leads to interleaved block accesses, which makes detecting an access pattern a very challenging problem. Towards this, we propose and evaluate QuickMine, a novel, lightweight and minimally intrusive method for contextaware prefetching. Under QuickMine, we capture application contexts, such as a transaction or query, and leverage them for context-aware prediction and improved prefetching effectiveness in the storage cache. We implement a prototype of our context-aware prefetching algorithm in a storage-area network (SAN) built using Network Block Device (NBD). Our prototype shows that context-aware prefetching clearly out-performs existing context-oblivious prefetching algorithms, resulting in factors of up to 2 improvements in application latency for two e-commerce workloads with repeatable access patterns, TPC-W and RUBiS. | The Case for Efficient File Access Pattern Modeling Most modern I/O systems treat each file access independently. 
However, events in a computer system are driven by programs. Thus, accesses to files occur in consistent patterns and are by no means independent. The result is that modern I/O systems ignore useful information. Using traces of file system activity, we show that file accesses are strongly correlated with preceding accesses. In fact, a simple last-successor model (one that predicts each file access will be followed by the same file that followed the last time it was accessed) successfully predicted the next file 72% of the time. We examine the ability of two previously proposed models for file access prediction in comparison to this baseline model and see a stark contrast in accuracy and high overheads in state space. We then enhance one of these models to address the issue of model space requirements. This new model is able to improve on the accuracy of the last-successor model by an additional 10%, while working within a state space that is within a constant factor (relative to the number of files) of the last-successor model. While this work was motivated by the use of file relationships for I/O prefetching, information regarding the likelihood of file access patterns has several other uses such as disk layout and file clustering for disconnected operation. | A fast file system for UNIX | The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz. | Affinity analysis of coded data sets Coded data sets are commonly used as compact representations of real world processes. Such data sets have been studied within various research fields, from association mining, data warehousing, knowledge discovery, and collaborative filtering to machine learning. However, previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval for enabling high performance analysis of data masses that scale beyond traditional approaches. Part of this PhD study focuses on a new type of kernel projection function that can be used to find similarities in sparse discrete data spaces.
This study presents experimental results on how information retrieval indexes scale and outperform two common relational data schemas with a leading commercial DBMS for market basket analysis. | Practical Issues in Temporal Difference Learning This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating. | The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5], where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the "counting polynomial-time hierarchy" which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved. | Planning as search: a quantitative approach We present the thesis that planning can be viewed as problem-solving search using subgoals, macro-operators, and abstraction as knowledge sources. Our goal is to quantify problem-solving performance using these sources of knowledge. New results include the identification of subgoal distance as a fundamental measure of problem difficulty, a multiplicative time-space tradeoff for macro-operators, and an analysis of abstraction which concludes that abstraction hierarchies can reduce exponential problems to linear complexity. | Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early. | The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms A fundamental challenge in improving file system performance is to design effective block replacement algorithms to minimize buffer cache misses.
Despite the well-known interactions between prefetching and caching, almost all buffer cache replacement algorithms have been proposed and studied comparatively, without taking into account file system prefetching, which exists in all modern operating systems. This paper shows that such kernel prefetching can have a significant impact on the relative performance in terms of the number of actual disk I/Os of many well-known replacement algorithms; it can not only narrow the performance gap but also change the relative performance benefits of different algorithms. Moreover, since prefetching can increase the number of blocks clustered for each disk I/O and, hence, the time to complete the I/O, the reduction in the number of disk I/Os may not translate into proportional reduction in the total I/O time. These results demonstrate the importance of buffer caching research taking file system prefetching into consideration and comparing the actual disk I/Os and the execution time under different replacement algorithms. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.05 | 0.008333 | 0.006897 | 0.001695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Reasoning about Concurrent Actions and Observations In this paper we present an experiment by using abductive logic programming to reason about concurrent actions and observations. Technically, we extend Gelfond and Lifschitz' action description language A with concurrent actions and observation propositions to describe the ideal behaviour of domains of (concurrent) actions and practically observed behaviour, respectively, without requiring that the practically observed behaviour of a domain of actions be consistent with its ideal behaviour. We... | Explicit and implicit indeterminism reasoning about uncertain and contradictory specifications of dynamic systems A high-level action semantics for specifying and reasoning about dynamic systems is presented which supports both uncertain knowledge (taken as explicit indeterminism) and contradictory information (taken as implicit indeterminism). We start by developing an action description language for intentionally representing nondeterministic actions in dynamic systems. We then study the different possibilities of interpreting contradictory specifications of concurrent actions. We argue that the most reasonable interpretation, which allows for exploiting as much information as possible, is to take such conflicts as implicit indeterminism. As the second major contribution, we present a calculus for our resulting action semantics based on the logic programming paradigm including negation-as-failure and equational theories. Soundness and completeness of this encoding w.r.t. the notion of entailment in our action language is proved by taking the completion semantics for equational logic programs with negation. | Computing change and specificity with equational logic programs Recent deductive approaches to reasoning about action and change allow us to model objects and methods in a deductive framework. In these approaches, inheritance of methods comes for free, whereas overriding of methods is unsupported. In this paper, we present an equational logic framework for objects, methods, inheritance and overriding of methods. Overriding is achieved via the concept of specificity, which states that more specific methods are preferred to less specific ones. Specificity is computed with the help of negation as failure. We specify equational logic programs and show that their completed versions behave as intended. Furthermore, we prove that SLDENF-resolution is complete if the equational theory is finitary, the completed programs are consistent and no derivation flounders or is infinite. Moreover, we give syntactic conditions which guarantee that no derivation flounders or is infinite. Finally, we discuss how the approach can be extended to reasoning about the past in the context of incompletely specified objects or situations. It will turn out that constructive negation is needed to solve these problems. | Representing Concurrent Actions and Solving Conflicts As an extension of the well-known Action Description language A introduced by M. Gelfond and V. Lifschitz (7), C. Baral and M. Gelfond recently defined the dialect AC which allows the description of concurrent actions (1). Also, a sound but incomplete encoding of AC by means of an extended logic program was presented there. In this paper, we work on interpretations of contradictory inferences from partial action descriptions.
Employing an interpretation different from the one implicitly used in AC, we present a new dialect A+C, which allows us to infer non-contradictory information from contradictory descriptions and to describe nondeterminism and uncertainty. Furthermore, we give the first sound and complete encoding of AC, using equational logic programming, and extend it to A+C as well. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Empirical Analysis of Predictive Algorithms for Collaborative Filtering Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation... | Predicting individual disease risk based on medical history The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future disease risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks. | Real-time multimedia systems The expansion of multimedia networks and systems depends on real-time support for media streams and interactive multimedia services. Multimedia data are essentially continuous, heterogeneous, and isochronous, three characteristics with strong real-time implications when combined. At the same time, some multimedia services, like video-on-demand or distributed simulation, are real-time applications with sophisticated temporal functionalities in their user interface.
We analyze the main problems in building such real-time multimedia systems, and we discuss-under an architectural prospect-some technological solutions especially those regarding determinism and efficient synchronization in the storage, processing, and communication of audio and video data | NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions. | A Stable Distributed Scheduling Algorithm | Normal forms for answer sets programming Normal forms for logic programs under stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The body of rules is composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle, or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e. the syntactic characterization of existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called Cycle Graph which presented in the companion article Costantini (2004b). | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). 
Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.033333 | 0.014286 | 0.0125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Kolmogorov complexity and degrees of tally sets We show that either E^p_m(TALLY) = E^p_btt(TALLY) or E^p_m(TALLY) ⊂ E^p_{1-tt}(TALLY) ⊂ E^p_{2-tt}(TALLY) ⊂ E^p_{3-tt}(TALLY) ⊂ ..., where E^p_r(TALLY) denotes the class of sets which are equivalent to a tally set under ≤^p_r reductions. Furthermore, the question of whether or not E^p_m(TALLY) = E^p_btt(TALLY) is equivalent to the question of whether or not NE predicates can be solved in deterministic exponential time. The proofs use the techniques of generalized Kolmogorov complexity. As corollaries to some of the main results, we obtain new results about the Kolmogorov complexity of sets in P. | Lower bounds for constant depth circuits in the presence of help bits The problem of how many extra bits of `help' a constant depth circuit needs in order to compute m functions is considered. Each help bit can be an arbitrary Boolean function. An exponential lower bound on the size of the circuit computing m parity functions in the presence of m-1 help bits is proved. The proof is carried out using the algebraic machinery of A. Razborov (1987) and R. Smolensky (1987). A by-product of the proof is that the same bound holds for circuits with mod p gates for a fixed prime p>2. The lower bound implies a random oracle separation for PH and PSPACE, which is optimal in a technical sense. | The complexity of facets (and some facets of complexity) Many important combinatorial optimization problems, including the traveling salesman problem (TSP), the clique problem and many others, call for the optimization of a linear functional over some discrete set of vectors. | Counting, Selecting, and Sorting by Query-Bounded Machines We study the query-complexity of counting, selecting, and sorting functions. That is, for a given set A and a positive integer k, we ask, how many queries to an arbitrary oracle does a polynomial-time machine on input (x_1, x_2, ..., x_k) need to determine how many strings of the input are in A. We also ask how many queries are necessary to select a string in A from the input (x_1, x_2, ..., x_k) if such a string exists and to sort the input (x_1, x_2, ..., x_k) with respect to the ordering x ≤ y if and only if x ∈ A implies y ∈ A. We obtain optimal query-bounds for these problems, and show that sets for which these functions have a low query-complexity must be easy in some sense. For such sets we obtain optimal placements in the extended low hierarchy. We also show that in the case of NP-complete sets the lower bounds for counting and selecting hold unless P=NP. Finally, we relate these notions to cheatability and p-superterseness. Our results yield as corollaries extensions of previously known results. | Some connections between bounded query classes and non-uniform complexity It is shown that if there is a polynomial-time algorithm that tests k(n)=O(log n) points for membership in a set A by making only k(n)-1 adaptive queries to an oracle set X, then A belongs to NP/poly intersection co-NP/poly (if k(n)=O(1) then A belongs to P/poly). In particular, k(n)=O(log n) queries to an NP-complete set (k(n)=O(1) queries to an NP-hard set) are more powerful than k(n)-1 queries, unless the polynomial hierarchy collapses. Similarly, if there is a small circuit that tests k(n) points for membership in A by making only k(n)-1 adaptive queries to a set X, then there is a correspondingly small circuit that decides membership in A without an oracle. An investigation is conducted of the quantitatively stronger assumption that there is a polynomial-time algorithm that tests 2k strings for membership in A by making only k queries to an oracle X, and qualitatively stronger conclusions about the structure of A are derived: A cannot be self-reducible unless A∈P, and A cannot be NP-hard unless P=NP. Similar results hold for counting classes. In addition, relationships between bounded-query computations, lowness, and the p-degrees are investigated. | Facets of the knapsack polytope A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality. A sufficient condition is also established for a larger class of inequalities (with coefficients not restricted to 0 and 1) to define a facet for the same polytope, and a procedure is given for generating all facets in the above two classes. The procedure can be viewed as a way of generating cutting planes for 0–1 programs. | The complexity of evaluating relational queries We prove a sequence of results which characterize exactly the complexity of problems related to the evaluation of relational queries consisting of projections and natural joins. We show that testing whether the result of a given query on a given relation equals some other given relation is D^p-complete (D^p is a class which includes both NP and co-NP, and was recently introduced in a totally different context [13]). We show that testing inclusion or equivalence of queries with respect to a fixed relation (or of relations with respect to a fixed query) is Π^p_2-complete. We also examine the complexity of estimating the number of tuples of the answer. | The contract net protocol: high-level communication and control in a distributed problem solver The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is affected by a negotiation process, a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks.
| Wikipedia risks | Logic programs with exceptions We extend logic programming to deal with default reasoning by allowing the explicit representation of exceptions in addition to general rules. To formalise this extension, we modify the answer set semantics of Gelfond and Lifschitz, which allows both classical negation and negation as failure. We also propose a transformation which eliminates exceptions by using negation by failure. The transformed program can be implemented by standard logic programming methods, such as SLDNF. The explicit representation of rules and exceptions has
the virtue of greater naturalness of expression. The transformed program, however, is easier to implement. | Computing change and specificity with equational logic programs Recent deductive approaches to reasoning about action and chance allow us to model objects and methods in a deductive framework. In these approaches, inheritance of methods comes for free, whereas overriding of methods is unsupported. In this paper, we present an equational logic framework for objects, methods, inheritance and overriding of methods. Overriding is achieved via the concept of specificity, which states that more specific methods are preferred to less specific ones. Specificity is computed with the help of negation as failure. We specify equational logic programs and show that their completed versions behave as intended. Furthermore, we prove that SLDENF-resolution is complete if the equational theory is finitary, the completed programs are consistent and no derivation flounders or is infinite. Moreover, we give syntactic conditions which guarantee that no derivation flounders or is infinite. Finally, we discuss how the approach can be extended to reasoning about the past in the context of incompletely specified objects or situations. It will turn out that constructive negation is needed to solve these problems. | A comparison of FFS disk allocation policies The 4.4BSD file system includes a new algorithm for allocating disk blocks to files. The goal of this algorithm is to improve file clustering, increasing the amount of sequential I/O when reading or writing files, thereby improving file system performance. In this paper we study the effectiveness of this algorithm at reducing file system fragmentation. We have created a program that artificially ages a file system by replaying a workload similar to that experienced by a real file system. We used this program to evaluate the effectiveness of the new disk allocation algorithm by replaying ten months of activity on two file systems that differed only in the disk allocation algorithms that they used. At the end of the ten month simulation, the file system using the new allocation algorithm had approximately half the fragmentation of a similarly aged file system that used the traditional disk allocation algorithm. Measuring the performance difference between the two file systems by reading and writing the same set of files on the two systems showed that this decrease in fragmentation improved file write throughput by 20% and read throughput by 32%. In certain test cases, the new allocation algorithm provided a performance improvement of greater than 50%. | Representing the process semantics in the situation calculus This paper presents a formal method based on the high-level semantics of processes to reason about continuous change. With a case study we show how the semantics of processes can be integrated with the situation calculus. The soundness and completeness of situation calculus with respect to the process semantics are proven. Furthermore, the logical programming is implemented to support the semantics of processes with the situation calculus. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. 
The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.248 | 0.006857 | 0.001785 | 0.001714 | 0.00049 | 0.000004 | 0.000001 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Improving the Performance of Cluster Applications through I/O Proxy Architecture Clusters are the most common solutions for high performance computing at the present time. In this kind of system, an important challenge is the I/O subsystem design. Typically, these environments are not flexible enough, and the only way to solve performance bottlenecks is to add new hardware. In this paper, we show how an I/O proxy-based architecture can improve the I/O performance of cluster applications in three ways: adapting to the application requirements, reducing the load on the I/O nodes, and finally, increasing the global performance of the storage system. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient.
We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
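One of the candidate abstracts in the row above, "Nonlinear component analysis as a kernel eigenvalue problem", reduces nonlinear PCA to an eigendecomposition of a centered kernel matrix. The following is a minimal NumPy sketch of that computation, not code from the paper; the RBF kernel choice, the parameter names, and the toy data are illustrative assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto the leading principal components in an RBF feature space.

    Illustrative sketch of the kernel-eigenvalue formulation of nonlinear PCA
    described above; names and defaults are ours, not the paper's.
    """
    # Pairwise squared distances and RBF (Gaussian) kernel matrix.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # Center the kernel matrix in feature space: K' = K - 1K - K1 + 1K1.
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecomposition of the symmetric centered kernel matrix.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))

    # Projections of the training points onto the nonlinear components.
    return Kc @ alphas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # toy data, not from any dataset row
    Z = kernel_pca(X, n_components=2)
    print(Z.shape)                          # (100, 2)
```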
Extreme Learning Machine for Multilayer Perceptron. Extreme learning machine (ELM) is an emerging learning algorithm for the generalized single hidden layer feedforward neural networks, of which the hidden node parameters are randomly generated and the output weights are analytically computed. However, due to its shallow architecture, feature learning using ELM may not be effective for natural signals (e.g., images/videos), even with a large number of hidden nodes. To address this issue, in this paper, a new ELM-based hierarchical learning framework is proposed for multilayer perceptron. The proposed architecture is divided into two main components: 1) self-taught feature extraction followed by supervised feature classification and 2) they are bridged by random initialized hidden weights. The novelties of this paper are as follows: 1) unsupervised multilayer encoding is conducted for feature extraction, and an ELM-based sparse autoencoder is developed via ℓ₁ constraint. By doing so, it achieves more compact and meaningful feature representations than the original ELM; 2) by exploiting the advantages of ELM random feature mapping, the hierarchically encoded outputs are randomly projected before final decision making, which leads to a better generalization with faster learning speed; and 3) unlike the greedy layerwise training of deep learning (DL), the hidden layers of the proposed framework are trained in a forward manner. Once the previous layer is established, the weights of the current layer are fixed without fine-tuning. Therefore, it has much better learning efficiency than the DL. Extensive experiments on various widely used classification data sets show that the proposed algorithm achieves better and faster convergence than the existing state-of-the-art hierarchical learning methods. Furthermore, multiple applications in computer vision further confirm the generality and capability of the proposed learning scheme. | Dimension Reduction With Extreme Learning Machine. Data may often contain noise or irrelevant information, which negatively affect the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoder (AE), is to reduce the noise or irrelevant information of the data. The features of... | Extreme learning machines: new trends and applications. Extreme learning machine (ELM), as a new learning framework, draws increasing attractions in the areas of large-scale computing, high-speed signal processing, artificial intelligence, and so on. ELM aims to break the barriers between the conventional artificial learning techniques and biological learning mechanism and represents a suite of machine learning techniques in which hidden neurons need not to be tuned. ELM theories and algorithms argue that “random hidden neurons” capture the essence of some brain learning mechanisms as well as the intuitive sense that the efficiency of brain learning need not rely on computing power of neurons. Thus, compared with traditional neural networks and support vector machine, ELM offers significant advantages such as fast learning speed, ease of implementation, and minimal human intervention. Due to its remarkable generalization performance and implementation efficiency, ELM has been applied in various applications. In this paper, we first provide an overview of newly derived ELM theories and approaches. 
On the other hand, with the ongoing development of multilayer feature representation, some new trends on ELM-based hierarchical learning are discussed. Moreover, we also present several interesting ELM applications to showcase the practical advances on this subject. | Local Receptive Fields Based Extreme Learning Machine Extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden layer feedforward neural networks (SLFNs), provides efficient unified learning solutions for the applications of feature learning, clustering, regression and classification. Different from the common understanding and tenet that hidden neurons of neural networks need to be iteratively adjusted during training stage, ELM theories show that hidden neurons are important but need not be iteratively tuned. In fact, all the parameters of hidden nodes can be independent of training samples and randomly generated according to any continuous probability distribution. And the obtained ELM networks satisfy universal approximation and classification capability. The fully connected ELM architecture has been extensively studied. However, ELM with local connections has not attracted much research attention yet. This paper studies the general architecture of locally connected ELM, showing that: 1) ELM theories are naturally valid for local connections, thus introducing local receptive fields to the input layer; 2) each hidden node in ELM can be a combination of several hidden nodes (a subnetwork), which is also consistent with ELM theories. ELM theories may shed a light on the research of different local receptive fields including true biological receptive fields of which the exact shapes and formula may be unknown to human beings. As a specific example of such general architectures, random convolutional nodes and a pooling structure are implemented in this paper. Experimental results on the NORB dataset, a benchmark for object recognition, show that compared with conventional deep learning solutions, the proposed local receptive fields based ELM (ELM-LRF) reduces the error rate from 6.5% to 2.7% and increases the learning speed up to 200 times. | Kernel-Based Multilayer Extreme Learning Machines for Representation Learning. Recently, multilayer extreme learning machine (ML-ELM) was applied to stacked autoencoder (SAE) for representation learning. In contrast to traditional SAE, the training time of ML-ELM is significantly reduced from hours to seconds with high accuracy. However, ML-ELM suffers from several drawbacks: 1) manual tuning on the number of hidden nodes in every layer is an uncertain factor to training tim... | Learning deep representations via extreme learning machines. Extreme learning machine (ELM) as an emerging technology has achieved exceptional performance in large-scale settings, and is well suited to binary and multi-class classification, as well as regression tasks. However, existing ELM and its variants predominantly employ single hidden layer feedforward networks, leaving the popular and potentially powerful stacked generalization principle unexploited for seeking predictive deep representations of input data. Deep architectures can find higher-level representations, thus can potentially capture relevant higher-level abstractions. But most of current deep learning methods require solving a difficult and non-convex optimization problem. 
In this paper, we propose a stacked model, DrELM, to learn deep representations via extreme learning machine according to stacked generalization philosophy. The proposed model utilizes ELM as a base building block and incorporates random shift and kernelization as stacking elements. Specifically, in each layer, DrELM integrates a random projection of the predictions obtained by ELM into the original feature, and then applies kernel functions to generate the resultant feature. To verify the classification and regression performance of DrELM, we conduct the experiments on both synthetic and real-world data sets. The experimental results show that DrELM outperforms ELM and kernel ELMs, which appear to demonstrate that DrELM could yield predictive features that are suitable for prediction tasks. The performances of the deep models (i.e. Stacked Auto-encoder) are comparable. However, due to the utilization of ELM, DrELM is easier to learn and faster in testing. | Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev... | Nonlinear autoassociation is not equivalent to PCA. A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building error reconstruction surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition. | Non-Local Manifold Tangent Learning We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests to explore non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local non-parametric method fails. | Real-time multimedia systems The expansion of multimedia networks and systems depends on real-time support for media streams and interactive multimedia services.
Multimedia data are essentially continuous, heterogeneous, and isochronous, three characteristics with strong real-time implications when combined. At the same time, some multimedia services, like video-on-demand or distributed simulation, are real-time applications with sophisticated temporal functionalities in their user interface. We analyze the main problems in building such real-time multimedia systems, and we discuss -- under an architectural prospect -- some technological solutions, especially those regarding determinism and efficient synchronization in the storage, processing, and communication of audio and video data. | SafetyNet: improving the availability of shared memory multiprocessors with global checkpoint/recovery We develop an availability solution, called SafetyNet, that uses a unified, lightweight checkpoint/recovery mechanism to support multiple long-latency fault detection schemes. At an abstract level, SafetyNet logically maintains multiple, globally consistent checkpoints of the state of a shared memory multiprocessor (i.e., processors, memory, and coherence permissions), and it recovers to a pre-fault checkpoint of the system and re-executes if a fault is detected. SafetyNet efficiently coordinates checkpoints across the system in logical time and uses "logically atomic" coherence transactions to free checkpoints of transient coherence state. SafetyNet minimizes performance overhead by pipelining checkpoint validation with subsequent parallel execution. We illustrate SafetyNet avoiding system crashes due to either dropped coherence messages or the loss of an interconnection network switch (and its buffered messages). Using full-system simulation of a 16-way multiprocessor running commercial workloads, we find that SafetyNet (a) adds statistically insignificant runtime overhead in the common-case of fault-free execution, and (b) avoids a crash when tolerated faults occur. | Complexity of Data Tree Patterns over XML Documents We consider Boolean combinations of data tree patterns as a specification and query language for XML documents. Data tree patterns are tree patterns plus variable (in)equalities which express joins between attribute values. Data tree patterns are a simple and natural formalism for expressing properties of XML documents. We consider first the model checking problem (query evaluation), we show that it is DP-complete in general and already NP-complete when we consider a single pattern. We then consider the satisfiability problem in the presence of a DTD. We show that it is in general undecidable and we identify several decidable fragments. | Building extensible frameworks for data processing: The case of MDP, Modular toolkit for Data Processing. Data processing is a ubiquitous task in scientific research, and much energy is spent on the development of appropriate algorithms. It is thus relatively easy to find software implementations of the most common methods. On the other hand, when building concrete applications, developers are often confronted with several additional chores that need to be carried out beside the individual processing steps. These include for example training and executing a sequence of several algorithms, writing code that can be executed in parallel on several processors, or producing a visual description of the application.
The Modular toolkit for Data Processing (MDP) is an open source Python library that provides an implementation of several widespread algorithms and offers a unified framework to combine them to build more complex data processing architectures. In this paper we concentrate on some of the newer features of MDP, focusing on the choices made to automatize repetitive tasks for users and developers. In particular, we describe the support for parallel computing and how this is implemented via a flexible extension mechanism. We also briefly discuss the support for algorithms that require bi-directional data flow. (C) 2011 Elsevier B.V. All rights reserved. | Scalable virtual machine deployment using VM image caches In IaaS clouds, VM startup times are frequently perceived as slow, negatively impacting both dynamic scaling of web applications and the startup of high-performance computing applications consisting of many VM nodes. A significant part of the startup time is due to the large transfers of VM image content from a storage node to the actual compute nodes, even when copy-on-write schemes are used. We have observed that only a tiny part of the VM image is needed for the VM to be able to start up. Based on this observation, we propose using small caches for VM images to overcome the VM startup bottlenecks. We have implemented such caches as an extension to KVM/QEMU. Our evaluation with up to 64 VMs shows that using our caches reduces the time needed for simultaneous VM startups to that of a single VM. | 1.012973 | 0.013125 | 0.0125 | 0.010016 | 0.00625 | 0.0025 | 0.000163 | 0.000009 | 0.000001 | 0 | 0 | 0 | 0 | 0
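The ELM abstracts in the row above all rest on the same basic recipe: fix random hidden-layer weights, then solve for the output weights analytically by least squares. Below is a minimal sketch of that recipe under our own naming and defaults; it is an illustration of the basic single-layer scheme, not the hierarchical H-ELM or DrELM architectures themselves.

```python
import numpy as np

def elm_train(X, T, n_hidden=200, seed=0):
    """Train a basic extreme learning machine (ELM).

    Hidden-layer weights are random and fixed; output weights are obtained
    analytically via least squares. Parameter names are illustrative.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid hidden activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)   # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 30))                 # toy inputs
    T = np.eye(3)[rng.integers(0, 3, size=500)]    # one-hot targets
    W, b, beta = elm_train(X, T)
    pred = elm_predict(X, W, b, beta).argmax(axis=1)
    print((pred == T.argmax(axis=1)).mean())       # training accuracy
```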
An Implementation of Storage-Based Synchronous Remote Mirroring for SANs Remote mirroring ensures that all data written to a primary storage device are also written to a remote secondary storage device to support disaster recoverability. In this study, we designed and implemented a storage-based synchronous remote mirroring for SAN-attached storage nodes. Taking advantage of the high bandwidth and long-distance linking ability of dedicated fiber connections, this approach provides a consistent and up-to-date copy in a remote location to meet the demand for disaster recovery. This system has no host or application overhead, and it is also independent of the actual storage unit. In addition, we present a disk failover solution. The performance results indicate that the bandwidth of the storage node with mirroring under a heavy load was 98.67% of the bandwidth without mirroring, which was only a slight performance loss. This means that our synchronous remote mirroring has little impact on the host's average response time and the actual bandwidth of the storage node. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. 
Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
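The query abstract of the row above describes storage-based synchronous remote mirroring with disk failover. The toy model below illustrates only the two properties it relies on, acknowledging a write after both copies are updated and serving reads from the secondary when the primary fails; class and method names are illustrative assumptions, not the paper's implementation.

```python
class SyncMirror:
    """Toy model of storage-based synchronous remote mirroring.

    A write is acknowledged to the host only after both the primary and the
    remote secondary copy have been updated, so the secondary is always an
    up-to-date copy that can take over if the primary fails.
    """

    def __init__(self):
        self.primary = {}          # blocks on the local storage node
        self.secondary = {}        # blocks on the remote storage node
        self.primary_failed = False

    def write(self, block_id, data):
        if not self.primary_failed:
            self.primary[block_id] = data
        self.secondary[block_id] = data    # must complete before the ack
        return "ack"

    def read(self, block_id):
        # Disk failover: fall back to the remote copy if the primary is down.
        store = self.secondary if self.primary_failed else self.primary
        return store[block_id]

if __name__ == "__main__":
    m = SyncMirror()
    for i in range(4):
        m.write(i, f"block-{i}")
    m.primary_failed = True        # simulate loss of the primary node
    print(m.read(2))               # still served from the mirror: "block-2"
```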
When Acyclicity Is Not Enough: Limitations of the Causal Graph. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture.
They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. 
In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. 
The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
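"Protecting RAID Arrays against Unexpectedly High Disk Failure Rates", which closes the row above, works with a two-dimensional parity layout (n^2 data elements, 2n parity elements) plus n mirrored parity elements. The sketch below shows only the underlying XOR parity arithmetic and a single-block recovery; it is a simplified illustration, and the paper's mirroring of half the parity elements is represented here merely as an extra copy of the row parities.

```python
import numpy as np

def build_2d_parity(data):
    """Compute row and column parity for an n x n array of data blocks.

    Blocks are modelled as integers and parity as XOR; the n extra mirrored
    parity elements are represented by copying the row parities.
    """
    row_parity = np.bitwise_xor.reduce(data, axis=1)   # n elements
    col_parity = np.bitwise_xor.reduce(data, axis=0)   # n elements
    mirrored = row_parity.copy()                       # n additional copies
    return row_parity, col_parity, mirrored

def recover(data, row_parity, i, j):
    """Rebuild a lost block (i, j) from the surviving blocks in its row."""
    survivors = np.bitwise_xor.reduce(np.delete(data[i], j))
    return survivors ^ row_parity[i]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    data = rng.integers(0, 2**16, size=(4, 4))         # 4 x 4 toy data blocks
    rp, cp, mir = build_2d_parity(data)
    print(recover(data, rp, 2, 3) == data[2, 3])       # True
```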
Early Response to False Claims in Wikipedia | Wikipedia risks | The Orbiten Free Software Survey | Cultural Differences in Collaborative Authoring of Wikipedia This article explores the relationship between national culture and computer-mediated communication (CMC) in Wikipedia. The articles on the topic game from the French, German, Japanese, and Dutch Wikipedia websites were studied using content analysis methods. Correlations were investigated between patterns of contributions and the four dimensions of cultural influences proposed by Hofstede (Power Distance, Collectivism versus Individualism, Femininity versus Masculinity, and Uncertainty Avoidance). The analysis revealed cultural differences in the style of contributions across the cultures investigated, some of which are correlated with the dimensions identified by Hofstede. These findings suggest that cultural differences that are observed in the physical world also exist in the virtual world. | An empirical examination of Wikipedia's credibility | Logic Programming and Negation: A Survey. We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them. 1991 Mathematics Subject Classification: 68Q40, 68T15. CR Categories: F.3.2., F.4.1, H.3.3, I.2.3. Keywords and Phrases: negation, general logic programs, non-monotonic reasoning. Notes. The work of the first author was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2). The work... | On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided. | Consensus and Cooperation in Networked Multi-Agent Systems? This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analys... | Expressiveness and tractability in knowledge representation and reasoning | Serverless network file systems We propose a new paradigm for network file system design: serverless network file systems. While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our approach uses this location independence, in combination with fast local area networks, to provide better performance and scalability than traditional file systems.
Furthermore, because any machine in the system can assume the responsibilities of a failed component, our serverless design also provides high availability via redundant data storage. To demonstrate our approach, we have implemented a prototype serverless network file system called xFS. Preliminary performance measurements suggest that our architecture achieves its goal of scalability. For instance, in a 32-node xFS system with 32 active clients, each client receives nearly as much read or write throughput as it would see if it were the only active client. | Comparative Evaluation of Latency Tolerance Techniques for Software Distributed Shared Memory A key challenge in achieving high performance on software DSMs is overcoming their relatively large communication latencies. In this paper, we consider two techniques which address this problem: prefetching and multithreading. While previous studies have examined each of these techniques in isolation, this paper is the first to evaluate both techniques using a consistent hardware platform and set of applications, thereby allowing direct comparisons. In addition, this is the first study to consider combining prefetching and multithreading in a software DSM. We performed our experiments on real hardware using a full implementation of both techniques. Our experimental results demonstrate that both prefetching and multithreading result in significant performance improvements when applied individually. In addition, we observe that prefetching and multithreading can potentially complement each other by using prefetching to hide memory latency and multithreading to hide synchronization latency. | Phoenix: a safe in-memory file system Phoenix contains two timestamped versions of the in-memory file system allowing for a reserve version that ensures safety for diskless computers with battery-powered memory. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data.
By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.100326 | 0.100652 | 0.100652 | 0.050531 | 0.03351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
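The last candidate in the row above, "Learning Topic Representation for SMT with Neural Networks", selects translation rules by the distributional similarity between their topic vectors and the topic vector of the source text. The snippet below sketches only that similarity-based ranking step with random stand-in vectors; the neural embedding model itself is not reproduced, and all names are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def rank_rules_by_topic(source_topic, rule_topics):
    """Rank candidate translation rules by topic similarity to the source text.

    Each rule carries a topic vector; rules whose vectors are closest to the
    source sentence's topic vector would be preferred during decoding.
    """
    scores = {rule: cosine(source_topic, vec) for rule, vec in rule_topics.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    src = rng.normal(size=50)                                  # stand-in source topic vector
    rules = {f"rule_{i}": rng.normal(size=50) for i in range(5)}
    for rule, score in rank_rules_by_topic(src, rules):
        print(rule, round(score, 3))
```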
Disk Shadowing Disk shadowing is a technique for maintaining a set of two or more identical disk images on separate disk devices. Its primary purpose is to enhance reliability and availability of secondary storage by providing multiple paths to redundant data. However, shadowing can also boost I/O performance. In this paper, we contend that intelligent device scheduling of shadowed disks increases the I/O rate by allowing parallel reads and by substantially reducing the average seek time for random reads. In particular, we develop an analytic model which shows that the seek time for a random read in a shadow set is a monotonic decreasing function of the number of disks. | Dynamic Multi-Resource Load Balancing in Parallel Database Systems | On energy management, load balancing and replication In this paper we investigate some opportunities and challenges that arise in energy-aware computing in a cluster of servers running data-intensive workloads. We leverage the insight that servers in a cluster are often underutilized, which makes it attractive to consider powering down some servers and redistributing their load to others. Of course, powering down servers naively will render data stored only on powered down servers inaccessible. While data replication can be exploited to power down servers without losing access to data, unfortunately, care must be taken in the design of the replication and server power down schemes to avoid creating load imbalances on the remaining "live" servers. Accordingly, in this paper we study the interaction between energy management, load balancing, and replication strategies for data-intensive cluster computing. In particular, we show that Chained Declustering -- a replication strategy proposed more than 20 years ago -- can support very flexible energy management schemes. | Server-based smoothing of variable bit-rate streams We introduce an algorithm that uses buffer space available at the server for smoothing disk transfers of variable bit-rate streams. Previous smoothing techniques prefetched stream data into the client buffer space, instead. However, emergence of personal computing devices with widely different hardware configurations means that we should not always assume abundance of resources at the client side. The new algorithm is shown to have optimal smoothing effect under the specified constraints. We incorporate it into a prototype server, and demonstrate significant increase in the number of streams concurrently supported at different system scales. We also extend our algorithm for striping variable bit-rate streams on heterogeneous disks. High bandwidth utilization is achieved across all the different disks, which leads to server throughput improved by several factors at high loads. | The Mini and Micro Industries First Page of the Article | Disk Mirroring with Alternating Deferred Updates | Database machines: an idea whose time has passed? A critique of the future of database machines | Gracefully degradable disk arrays The problem of designing fault-tolerant disk arrays that are not susceptible to 100% load increases on the functional disks when one of the disks in the system fails is addressed. A technique that combines the advantages of parity schemes and the traditional dual copy methods and offers a wide variety of options in providing fault-tolerance is proposed. A theoretical framework for solving the problem is presented and a number of constructive techniques are proposed.
By utilizing the same amount of hardware as the earlier methods but with a better data organization and a different reconstruction technique, the system yields better performance during a failure. Merging two parity groups as a reconfiguration strategy is shown to have a number of benefits, such as reduced hardware overhead and improved reliability. A combination of block designs and the proposed reconfiguration strategy results in a highly reliable disk array with the same or less overhead as the earlier approaches and better performance during a failure. | Exposing I/O concurrency with informed prefetching Informed prefetching provides a simple mechanism for I/O-intensive, cache-ineffective applications to efficiently exploit highly-parallel I/O subsystems such as disk arrays. This mechanism, dynamic disclosure of future accesses, yields substantial benefits over sequential readahead mechanisms found in current file systems for non-sequential workloads. This paper reports the performance of the Transparent Informed Prefetching system (TIP), a minimal prototype implemented in a Mach 3.0 system with up to four disks. We measured reductions by factors of up to 1.9 and 3.7 in the execution time of two example applications: multi-file text search and scientific data visualization. | Microprocessor technology trends The rapid pace of advancement of microprocessor technology has shown no sign of diminishing, and this pace is expected to continue in the future. Recent trends in such areas as silicon technology, processor architecture and implementation, system organization, buses, higher levels of integration, self-testing, caches, coprocessors, and fault tolerance are discussed, and expectations for further ad... | Dynamic resource allocation for database servers running on virtual storage As increasing amounts of valuable information are produced and persist digitally, the ability to determine the origin of data becomes important. In science, medicine, commerce, and government, data provenance tracking is essential for rights protection, ... | Contingent planning under uncertainty via stochastic satisfiability We describe two new probabilistic planning techniques -- c-MAXPLAN and ZANDER -- that generate contingent plans in probabilistic propositional domains. Both operate by transforming the planning problem into a stochastic satisfiability problem and solving that problem instead. C-MAXPLAN encodes the problem as an E-MAJSAT instance, while ZANDER encodes the problem as an S-SAT instance. Although S-SAT problems are in a higher complexity class than E-MAJSAT problems, the problem encodings produced by ZANDER are substantially more compact and appear to be easier to solve than the corresponding E-MAJSAT encodings. Preliminary results for ZANDER indicate that it is competitive with existing planners on a variety of problems. | Learning action strategies for planning domains This paper reports on experiments where techniques of supervised machine learning are applied to the problem of planning. The input to the learning algorithm is composed of a description of a planning domain, planning problems in this domain, and solutions for them. The output is an efficient algorithm --- a strategy --- for solving problems in that domain. We test the strategy on an independent set of planning problems from the same domain, so that success is measured by its ability to solve...
| Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.00525 | 0.00492 | 0.004458 | 0.003945 | 0.003725 | 0.002691 | 0.00196 | 0.001082 | 0.000303 | 0.000043 | 0.000003 | 0 | 0 | 0 |
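The disk shadowing row above argues that the expected seek distance for a random read shrinks monotonically as disks are added to the shadow set, because the read can be serviced by whichever copy's arm happens to be closest. A minimal Monte Carlo sketch of that effect in Python; the uniform head and target positions and the cylinder range normalized to [0, 1] are illustrative assumptions, not the paper's analytic model.

import random

def expected_min_seek(n_disks, trials=100_000):
    """Estimate the expected seek distance for a random read when the request
    can be served by whichever of n_disks mirrored disks has its head closest
    to the target cylinder (cylinder positions normalized to [0, 1])."""
    total = 0.0
    for _ in range(trials):
        target = random.random()
        heads = (random.random() for _ in range(n_disks))
        total += min(abs(h - target) for h in heads)
    return total / trials

for n in (1, 2, 3, 4):
    print(n, round(expected_min_seek(n), 4))

Under these assumptions the estimate is about 0.33 of the cylinder range for a single disk and keeps falling as shadows are added, which is the monotone decrease the abstract describes.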
Integrated prefetching and caching in single and parallel disk systems We study integrated prefetching and caching in single and parallel disk systems. There exist two very popular approximation algorithms called Aggressive and Conservative for minimizing the total elapsed time in the single disk problem. For D parallel disks, approximation algorithms are known for both the elapsed time and stall time performance measures. In particular, there exists a D-approximation algorithm for the stall time measure that uses D-1 additional memory locations in cache. In the first part of the paper we investigate approximation algorithms for the single disk problem. We give a refined analysis of the Aggressive algorithm, showing that the original analysis was too pessimistic. We prove that our new bound is tight. Additionally, we present a new family of prefetching and caching strategies and give algorithms that perform better than Aggressive and Conservative. In the second part of the paper we investigate the problem of minimizing stall time in parallel disk systems. We present a polynomial time algorithm for computing a prefetching/caching schedule whose stall time is bounded by that of an optimal solution. The schedule uses at most 3(D-1) extra memory locations in cache. This is the first polynomial time algorithm for computing schedules with a minimum stall time. Our algorithm is based on the linear programming approach of [1]. However, in order to achieve minimum stall times, we introduce the new concept of synchronized schedules in which fetches on the D disks are performed completely in parallel. | OSF/1 Virtual Memory Improvements | Optimizing center performance through coordinated data staging, scheduling and recovery Procurement and the optimized utilization of Petascale supercomputers and centers is a renewed national priority. Sustained performance and availability of such large centers is a key technical challenge significantly impacting their usability. Storage systems are known to be the primary fault source leading to data unavailability and job resubmissions. This results in reduced center performance, partially due to the lack of coordination between I/O activities and job scheduling. In this work, we propose the coordination of job scheduling with data staging/offloading and on-demand staged data reconstruction to address the availability of job input data and to improve center-wide performance. Fundamental to both mechanisms is the efficient management of transient data: in the way it is scheduled and recovered. Collectively, from a center's standpoint, these techniques optimize resource usage and increase its data/service availability. From a user's standpoint, they reduce the job turnaround time and optimize the allocated time usage. | An Algorithm for Optimally Exploiting Spatial and Temporal Locality in Upper Memory Levels In this study, we present an extension of Belady's MIN algorithm that optimally and simultaneously exploits spatial and temporal locality. Thus, this algorithm provides a performance upper bound of upper memory levels. The purpose of this algorithm is to assess current memory optimizations and to evaluate the potential benefits of future optimizations. We formally prove the optimality of this new algorithm with respect to minimizing misses and we show experimentally that the algorithm produces nearly minimum memory traffic on the SPEC95 benchmarks.
| Storage-Aware Caching: Revisiting Caching for Heterogeneous Storage Systems Modern storage environments are composed of a variety of devices with different performance characteristics. In this paper we explore storage-aware caching algorithms, in which the file buffer replacement algorithm explicitly accounts for differences in performance across devices. We introduce a new family of storage-aware caching algorithms that partition the cache, with one partition per device. The algorithms set the partition sizes dynamically to balance work across the devices. Through simulation, we show that our storage-aware policies perform similarly to LANDLORD, a cost-aware algorithm previously shown to perform well in Web caching environments. We also demonstrate that partitions can be easily incorporated into the Clock replacement algorithm, thus increasing the likelihood of deploying cost-aware algorithms in modern operating systems. | Second-Level Buffer Cache Management Buffer caches are commonly used in servers to reduce the number of slow disk accesses or network messages. These buffer caches form a multilevel buffer cache hierarchy. In such a hierarchy, second-level buffer caches have different access patterns from first-level buffer caches because accesses to a second-level are actually misses from a first-level. Therefore, commonly used cache management algorithms such as the Least Recently Used (LRU) replacement algorithm that work well for single-level buffer caches may not work well for second-level. This paper investigates multiple approaches to effectively manage second-level buffer caches. In particular, it reports our research results in 1) second-level buffer cache access pattern characterization, 2) a new local algorithm called Multi-Queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, 3) a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over corresponding local algorithms, and 4) implementation and evaluation of these algorithms in a real storage system connected with commercial database servers (Microsoft SQL Server and Oracle) running industrial-strength online transaction processing benchmarks. | The Multics Input/Output system An I/O system has been implemented in the Multics system that facilitates dynamic switching of I/O devices. This switching is accomplished by providing a general interface for all I/O devices that allows all equivalent operations on different devices to be expressed in the same way. Also, particular devices are referenced by symbolic names and the binding of names to devices can be dynamically modified. Available I/O operations range from a set of basic I/O calls that require almost no knowledge of the I/O System or the I/O device being used to fully general calls that permit one to take full advantage of all features of an I/O device but require considerable knowledge of the I/O System and the device. The I/O System is described and some popular applications of it, illustrating these features, are presented. | Fido: A Cache That Learns to Fetch Accurately fetching data objects or pages in advance of their use is a powerful means of improving performance, but this capability has been difficult to realize. Current OODBs maintain object caches that employ fetch and replacement policies derived from those used for virtual-memory demand paging. These policies usually assume no knowledge of the future.
Object cache managers often employ demand fetching combined with data clustering to effect prefetching, but cluster prefetching can be ineffective when the access patterns serviced are incompatible. This paper describes FIDO, an experimental predictive cache that predicts access for individuals during a session by employing an associative memory to assimilate regularities in the access pattern of an individual over time. By dint of continual training, the associative memory adapts to changes in the database and in the user's access pattern, enabling on-line access predictions for prefetching. We discuss two salient components of Fido: (1) MLP, a replacement policy for managing pre-fetched objects, and (2) Estimating Prophet, an associative memory that recognizes patterns in access sequences adaptively over time and provides on-line predictions used for prefetching. We then present some early simulation results which suggest that predictive caching works well, especially for sequential access patterns, and conclude that predictive caching holds great promise. | Improving I/O Performance Using Soft-QoS-Based Dynamic Storage Cache Partitioning Resources are often shared to improve resource utilization and reduce costs. However, not all resources exhibit good performance when shared among multiple applications. The work presented here focuses on effectively managing a shared storage cache. To provide differentiated services to applications exercising a storage cache, we propose a novel scheme that uses curve fitting to dynamically partition the storage cache. Our scheme quickly adapts to application execution, showing increasing accuracy over time. It satisfies application QoS if it is possible to do so, maximizes the individual hit rates of the applications utilizing the cache, and consequently increases the overall storage cache hit rate. Through extensive trace-driven simulation, we show that our storage cache partitioning strategy not only effectively insulates multiple applications from one another but also provides QoS guarantees to applications over a long period of execution time. Using our partitioning strategy, we were able to increase the individual storage cache hit rates of the applications by 67% and 53% over the no-partitioning and equal-partitioning schemes, respectively. Additionally, we improved the overall cache hit rates of the entire storage system by 11% and 12.9% over the no-partitioning and equal-partitioning schemes, respectively, while meeting the QoS goals all the time. | Adaptive block rearrangement An adaptive technique for reducing disk seek times is described. The technique copies frequently referenced blocks from their original locations to reserved space near the middle of the disk. Reference frequencies need not be known in advance. Instead, they are estimated by monitoring the stream of arriving requests. Trace-driven simulations show that seek times can be cut substantially by copying only a small number of blocks using this technique. The technique has been implemented by modifying a UNIX device driver. No modifications are required to the file system that uses the driver. | Hamming Filters: A Dynamic Signature File Organization for Parallel Stores | Representing actions in equational logic programming A sound and complete approach for encoding the action description language A developed by M. Gelfond and V. Lifschitz in an equational logic program is given.
Our results allow the comparison of the resource-oriented, equational-logic-based approach and various other methods designed for reasoning about actions, most of them based on variants of the situation calculus, which were also related to the action description language recently. A non-trivial extension of A is proposed which allows us to handle uncertainty in the form of non-deterministic action descriptions, i.e. where actions may have alternative randomized effects. It is described how the equational logic programming approach forms a sound and complete encoding of this extended action description language AND as well. | SLX—a top-down derivation procedure for programs with explicit negation In this paper we define a sound and (theoretically) complete top-down derivation procedure for a well-founded semantics of logic programs extended with explicit negation (WFSX). By its nature, it is amenable to a simple interpreter implementation in Prolog, and readily allows pre-processing into Prolog, showing promise as an efficient basis for further development. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1.041428 | 0.032808 | 0.030001 | 0.025271 | 0.010108 | 0.004411 | 0.00195 | 0.000588 | 0.00006 | 0.00002 | 0.000001 | 0 | 0 | 0
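The prefetching/caching row above measures schedules by their stall time: the processor serves one cached reference per time unit and stalls whenever a referenced block has not yet arrived from the single disk. A toy Python model of that accounting, assuming a purely sequential scan, one outstanding fetch at a time, and a prefetch-depth parameter standing in for the cache headroom; these are simplifications for illustration, not the Aggressive or Conservative algorithms themselves.

def sequential_scan_stall(n_blocks, fetch_latency, cpu_per_block, prefetch_depth):
    """Stall time for scanning n_blocks sequentially from one disk when up to
    prefetch_depth blocks may be fetched ahead of the consumer
    (prefetch_depth = 0 is plain demand fetching).  Fetches are serviced one
    at a time; consuming a cached block costs cpu_per_block time units."""
    disk_free = 0.0               # time at which the disk can start its next fetch
    ready = [None] * n_blocks     # completion time of each block's fetch
    issued = 0
    clock = 0.0
    stall = 0.0
    for i in range(n_blocks):
        # issue fetches up to the allowed prefetch depth
        while issued <= min(i + prefetch_depth, n_blocks - 1):
            start = max(clock, disk_free)
            disk_free = start + fetch_latency
            ready[issued] = disk_free
            issued += 1
        if ready[i] > clock:      # block i has not arrived yet: processor stalls
            stall += ready[i] - clock
            clock = ready[i]
        clock += cpu_per_block    # consume block i
    return stall

With fetch_latency larger than cpu_per_block, depth 0 reproduces demand fetching's stall of roughly n_blocks times the fetch latency, while larger depths hide part of each fetch behind the consumption of earlier blocks, which is the effect the elapsed-time and stall-time measures in the abstract capture.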
Comparing disk scheduling algorithms for VBR data streams We compare a number of disk scheduling algorithms that can be used in a multimedia server for sustaining multiple variable-bit-rate (VBR) data streams. A data stream is sustained by repeatedly fetching a block of data from disk and storing it in a corresponding buffer. For each of the disk scheduling algorithms we give necessary and sufficient conditions for avoiding underflow and overflow of the buffers. In addition, the algorithms are compared with respect to buffer requirements as well as average response times. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made.
We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. 
Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. 
With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
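The VBR scheduling row above judges algorithms by whether their per-round fetches keep every stream's buffer from underflowing or overflowing. A small Python check of one stream's buffer evolution under a given fetch schedule; the round-based model in which the whole fetch lands before the round's consumption is an illustrative simplification, not the paper's necessary-and-sufficient conditions.

def feasible(consumption, fetches, buffer_size, initial_level=0):
    """Round-based check for a single VBR stream: at the start of round i a
    block of fetches[i] bits is added to the buffer, then the client consumes
    consumption[i] bits.  The schedule is feasible if the buffer never exceeds
    buffer_size after a fetch (overflow) and never goes negative after
    consumption (underflow)."""
    level = initial_level
    for fetch, consume in zip(fetches, consumption):
        level += fetch
        if level > buffer_size:
            return False          # overflow
        level -= consume
        if level < 0:
            return False          # underflow
    return True

For example, feasible(consumption=[4, 9, 2, 7], fetches=[6, 6, 6, 6], buffer_size=12) asks whether a constant six-unit fetch per round can sustain that consumption pattern; each scheduling algorithm in the comparison has to answer a question of this shape for every admitted stream.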
Exploiting Web Log Mining for Web Cache Enhancement Improving the performance of the Web is a crucial requirement, since its popularity resulted in a large increase in the user-perceived latency. In this paper, we describe a Web caching scheme that capitalizes on prefetching. Prefetching refers to the mechanism of deducing forthcoming page accesses of a client, based on access log information. Web log mining methods are exploited to provide effective prediction of Web-user accesses. The proposed scheme achieves a coordination between the two techniques (i.e., caching and prefetching). The prefetched documents are accommodated in a dedicated part of the cache, to avoid the drawback of incorrect replacement of requested documents. The requirements of the Web are taken into account, compared to the existing schemes for buffer management in database and operating systems. Experimental results indicate the superiority of the proposed method compared to the previous ones, in terms of improvement in cache performance. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. 
The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
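The web caching row above keeps prefetched documents in a dedicated part of the cache so that mispredicted prefetches cannot evict documents that were actually requested. A toy Python cache sketching that separation; the partition sizes, LRU replacement in both partitions, and promotion of a prefetched document on its first hit are assumptions for illustration, not the paper's exact policy.

from collections import OrderedDict

class PartitionedCache:
    """Two-partition cache: demand-fetched documents live in `main`,
    prefetched documents in `prefetch`, each managed with LRU replacement."""
    def __init__(self, main_size, prefetch_size):
        self.main = OrderedDict()
        self.prefetch = OrderedDict()
        self.main_size, self.prefetch_size = main_size, prefetch_size

    def _put(self, store, size, doc):
        store[doc] = True
        store.move_to_end(doc)
        while len(store) > size:
            store.popitem(last=False)        # evict the LRU entry

    def prefetch_doc(self, doc):
        # predicted documents only ever displace other predictions
        if doc not in self.main:
            self._put(self.prefetch, self.prefetch_size, doc)

    def request(self, doc):
        if doc in self.main:
            self.main.move_to_end(doc)
            return True                       # hit in the main partition
        if doc in self.prefetch:
            del self.prefetch[doc]            # correct prediction: promote it
            self._put(self.main, self.main_size, doc)
            return True
        self._put(self.main, self.main_size, doc)   # demand miss
        return False

A driver would feed predicted page accesses from the log-mining model into prefetch_doc and the actual client requests into request, so that prediction quality affects the prefetch partition's hit rate without ever hurting the demand-fetched working set.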
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning. | Mapping conformant planning into SAT through compilation and projection Conformant planning is a variation of classical AI planning where the initial state is partially known and actions can have non-deterministic effects. While a classical plan must achieve the goal from a given initial state using deterministic actions, a conformant plan must achieve the goal in the presence of uncertainty in the initial state and action effects. Conformant planning is computationally harder than classical planning, and unlike classical planning, cannot be reduced polynomially to SAT (unless P = NP). Current SAT approaches to conformant planning, such as those considered by Giunchiglia and colleagues, thus follow a generate-and-test strategy: the models of the theory are generated one by one using a SAT solver (assuming a given planning horizon), and from each such model, a candidate conformant plan is extracted and tested for validity using another SAT call. This works well when the theory has few candidate plans and models, but otherwise is too inefficient. In this paper we propose a different use of a SAT engine where conformant plans are computed by means of a single SAT call over a transformed theory. This transformed theory is obtained by projecting the original theory over the action variables. This operation, while intractable, can be done efficiently provided that the original theory is compiled into d-DNNF (Darwiche 2001), a form akin to OBDDs (Bryant 1992). The experiments that are reported show that the resulting compile-project-sat planner is competitive with state-of-the-art optimal conformant planners and improves upon a planner recently reported at ICAPS-05. | Learning to Take Actions We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a class of rule-based action strategies for which polynomial time learning is possible. The representation of strategies is a generalization of decision lists; strategies include rules with existentially quantified conditions, simple recursive predicates, and small internal state, but are syntactically restricted. We also study the learnability of hierarchically composed strategies where a subroutine already acquired can be used as a basic action in a higher-level strategy. We prove some positive results in this setting, but also show that in some cases the hierarchical learning problem is computationally hard. | Logic programming and knowledge representation - the A-Prolog perspective In this paper we give a short introduction to the logic programming approach to knowledge representation and reasoning. The intention is to help the reader to develop a 'feel' for the field's history and some of its recent developments.
The discussion is mainly limited to logic programs under the answer set semantics. For an understanding of approaches to logic programming built on well-founded semantics, general theories of argumentation, abductive reasoning, etc., the reader is referred to other publications. | Linear Time Near-Optimal Planning in the Blocks World This paper reports an analysis of near-optimal Blocks World planning. Various methods are clarified, and their time complexity is shown to be linear in the number of blocks, which improves their known complexity bounds. The speed of the implemented programs (ten thousand blocks are handled in a second) enables us to make empirical observations on large problems. These suggest that the above methods have very close average performance ratios, and yield a rough upper bound on those ratios well below the worst case of 2. Further, they lead to the conjecture that in the limit the simplest linear time algorithm could be just as good on average as the optimal one. | Bounded Branching and Modalities in Non-Deterministic Planning. We study the consequences on complexity that arise when bounds on the number of branch points on the solutions for non-deterministic planning problems are imposed as well as when modal formulae are introduced into the description language. New planning tasks, such as whether there exists a plan with at most k branch points for a fully (or partially) observable non-deterministic domain, and whether there exists a no-branch (a.k.a. conformant) plan for partially observable domains, are introduced and their complexity analyzed. Among other things, we show that deciding the existence of a conformant plan for partially observable domains with modal formulae is 2EXPSPACE-complete, and that the problem of deciding the existence of plans with bounded branching, for fully or partially observable contingent domains, has the same complexity as the conformant task. These results generalize previous results on the complexity of nondeterministic planning and fill a slot that has gone unnoticed in non-deterministic planning, that of conformant planning for partially observable domains. | Recent Advances in AI Planning The past five years have seen dramatic advances in planning algorithms, with an emphasis on propositional methods such as Graphplan and compilers that convert planning problems into propositional CNF formulae for solution via systematic or stochastic SAT methods. Related work on the Deep Space One spacecraft control algorithms advances our understanding of interleaved planning and execution. In this survey, we explain the latest techniques and suggest areas for future research. | Compilation Schemes: A Theoretical Tool for Assessing the Expressive Power of Planning Formalisms The recent approaches of extending the Graphplan algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of "expressive power" is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of "compilation schemes" between planning formalisms. Using this notion, we analyze the expressive power of a large family of propositional planning formalisms and show, e.g., that Gazen and Knoblock's approach to compiling conditional effects away is optimal.
| Complexity results for blocks-world planning Although blocks-world planning is well-known, its complexity has not previously been analyzed, and different planning researchers have expressed conflicting opinions about its difficulty. In this paper, we present the following results: 1. Finding optimal plans in a well-known formulation of the blocks-world planning domain is NP-hard, even if the goal state is completely specified. 2. Classical examples of deleted-condition interactions such as Sussman's anomaly and creative destruction are not difficult to handle in this domain, provided that the right planning algorithm is used. Instead, the NP-hardness of the problem results from difficulties in determining which of several different actions will best help to achieve multiple goals. | How long will it take? We present a method for approximating the expected number of steps required by a heuristic search algorithm to reach a goal from any initial state in a problem space. The method is based on a mapping from the original state space to an abstract space in which states are characterized only by a syntactic "distance" from the nearest goal. Modeling the search algorithm as a Markov process in the abstract space yields a simple system of equations for the solution time for each state. We derive some insight into the behavior of search algorithms by examining some closed form solutions for these equations; we also show that many problem spaces have a clearly delineated "easy zone", inside which problems are trivial and outside which problems are impossible. The theory is borne out by experiments with both Markov and non-Markov search algorithms. Our results also bear on recent experimental data suggesting that heuristic repair algorithms can solve large constraint satisfaction problems easily, given a preprocessor that generates a sufficiently good initial state. | Reasoning about nondeterministic and concurrent actions: a process algebra approach We present a framework for reasoning about processes (complex actions) that are constituted by several concurrent activities performed by various interacting agents, The framework is based on two distinct formalisms: a representation formalism, which is a CCS-like process algebra associated with an explicit global store; and a reasoning formalism, which is an extension of modal mu-calculus, a powerful logic of programs that subsumes dynamic logics such as PDL and Delta PDL, and branching temporal logics such as CTL and CTL*. The reasoning service of interest in this setting is model checking in contrast to logical implication. This framework, although directly applicable only when complete information on the system behavior is available, has several interesting features for reasoning about actions in Artificial Intelligence. Indeed, it inherits formal and practical tools from the area of Concurrency in Computer Science, to deal with complex actions, treating suitably aspects like nonterminating executions, parallelism communications, and interruptions. (C) 1999 Published by Elsevier Science B.V. All rights reserved. | Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection.These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. 
Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales.The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds. | LH*g: a high-availability scalable distributed data structure by record grouping LH*g (Linear Hashing by grouping) is a high-availability extension of the LH* scalable distributed data structure. An LH*g file scales up with constant key search and insert performance, while surviving any single-site unavailability (failure). We achieve high availability through a new principle of record grouping. A group is a logical structure of up to k records, where k is a file parameter. Every group contains a parity record allowing for the reconstruction of an unavailable member. The basic scheme may be generalized to support the unavailability of any number of sites, at the expense of storage and messaging. Other known high-availability schemes are static, or require more storage, or provide worse search performance | Unsupervised (Parameter) Learning For Mrfs On Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach - a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double loop algorithm and the PCD learning experimentally and show that the former converges faster and more stable than the latter. | 1.002629 | 0.005475 | 0.003838 | 0.00258 | 0.002251 | 0.001967 | 0.001606 | 0.000958 | 0.000469 | 0.000099 | 0.000007 | 0 | 0 | 0 |
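The planning row above converts planning problems into propositional CNF and hands them to a stochastic local-search SAT procedure. A compact WalkSAT-style sketch in Python of the kind of solver involved; the clause representation, the noise parameter p, and the flip budget are conventional choices for illustration, not the exact engine used in the paper.

import random

def walksat(clauses, n_vars, max_flips=10_000, p=0.5):
    """Stochastic local search over a CNF given as a list of clauses, each a
    list of non-zero ints (positive = variable, negative = its negation)."""
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not satisfied(cl)]
        if not unsat:
            return assign                     # model found
        clause = random.choice(unsat)
        if random.random() < p:
            var = abs(random.choice(clause))  # random-walk move
        else:
            # greedy move: flip the variable leaving the fewest clauses unsatisfied
            def unsat_after_flip(v):
                assign[v] = not assign[v]
                count = sum(not satisfied(cl) for cl in clauses)
                assign[v] = not assign[v]
                return count
            var = min((abs(lit) for lit in clause), key=unsat_after_flip)
        assign[var] = not assign[var]
    return None                               # no model within the flip budget

A planning instance at a fixed horizon, encoded into clauses as the abstract describes, would simply be one more clause list passed to a routine of this shape; a satisfying assignment is then decoded back into a plan.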
Increasing distributed storage survivability with a stackable RAID-like file system We have designed a stackable file system called Redundant Array of Independent Filesystems (RAIF). It combines the data survivability properties and performance benefits of traditional RAIDs with the unprecedented flexibility of composition, improved security, and ease of development of stackable file systems. RAIF can be mounted on top of any combination of other file systems including network, distributed, disk-based, and memory-based file systems. Existing encryption, compression, antivirus, and consistency checking stackable file systems can be mounted above and below RAIF, to efficiently cope up with slow or unsecure branches. Individual files can be distributed across branches, replicated, stored with parity, or stored with erasure correction coding to recover from failures on multiple branches. Per-file incremental recovery, storage type migration, and load-balancing are especially well suited for grid storages. In this paper, we describe the current RAIF design, provide preliminary performance results and discuss current status and future directions. | The design and implementation of an extensible network backup system in realtime This paper proposes a backup system based on mirroring filesystem "GMFS." GMFS has been developed to mirror data in realtime on the filesystem layer. The GMFS is a stackable filesystem which flexibly mirrors without changing the existing environment by operating as a wrapper of other filesystems. Because the conventional mirroring technology utilizes the mirroring function on the device layer or needs a special filesystem, the allocation of the disk and the specific format of the filesystem are needed, and so the disk design is fixed. Therefore, the conventional mirroring technology cannot adjust when the mirroring function not assumed will be needed later. In this situation, the mechanism that adds the mirroring function without changing the existing disk design is necessary. The GMFS conducts the operation of other filesystems transparently, thereby users need not be aware of the GMFS. The conventional filesystem looks as if it performs mirroring data by itself. GMFS can therefore add the function that mirrors in realtime without destroying the existing environment. GMFS uses NFS which is a typical network file system to communicate with an existing environment. The throughput of reading and writing has been improved by adopting the method to call system function of NFS from the inside of the filesystem. We developed this filesystem, and evaluated the performance from the viewpoint of throughput and system call speed and CPU loads. As a result, it was shown that there was no problem in the viewpoint of the performance compared with the conventional filesystem, and the throughput of the read and write of GMFS was 2.0 times faster than conventional mirroring filesystem. | A new approach to I/O performance evaluation: self-scaling I/O benchmarks, predicted I/O performance Current I/O benchmarks suffer from several chronic problems: they quickly become obsolete, they do not stress the I/O system, and they do not help in understanding I/O system performance. We propose a new approach to I/O performance analysis. First, we propose a self-scaling benchmark that dynamically adjusts aspects of its workload according to the performance characteristic of the system being measured. By doing so, the benchmark automatically scales across current and future systems. 
The evaluation aids in understanding system performance by reporting how performance varies according to each of the workload parameters. Second, we propose predicted performance, a technique for using the results from the self-scaling evaluation to quickly estimate the performance for workloads that have not been measured. We show that this technique yields reasonably accurate performance estimates and argue that this method gives a far more accurate comparative performance evaluation than traditional single-point benchmarks. We apply our new evaluation technique by measuring a SPARCstation 1+ with one SCSI disk, an HP 730 with one SCSI-II disk, a Sprite LFS DECstation 5000/200 with a three-disk disk array, a Convex C240 minisupercomputer with a four-disk disk array, and a Solbourne 5E/905 fileserver with a two-disk disk array. | File system aging—increasing the relevance of file system benchmarks Benchmarks are important because they provide a means for users and researchers to characterize how their workloads will perform on different systems and different system architectures. The field of file system design is no different from other areas of research in this regard, and a variety of file system benchmarks are in use, representing a wide range of the different user workloads that may be run on a file system. A realistic benchmark, however, is only one of the tools that is required in order to understand how a file system design will perform in the real world. The benchmark must also be executed on a realistic file system. While the simplest approach may be to measure the performance of an empty file system, this represents a state that is seldom encountered by real users. In order to study file systems in more representative conditions, we present a methodology for aging a test file system by replaying a workload similar to that experienced by a real file system over a period of many months, or even years. Our aging tools allow the same aging workload to be applied to multiple versions of the same file system, allowing scientific evaluation of the relative merits of competing file system designs. In addition to describing our aging tools, we demonstrate their use by applying them to evaluate two enhancements to the file layout policies of the UNIX fast file system. | Stupid file systems are better File systems were originally designed for hosts with only one disk. Over the past 20 years, a number of increasingly complicated changes have optimized the performance of file systems on a single disk. Over the same time, storage systems have advanced on their own, separated from file systems by the narrow block interface. Storage systems have increasingly employed parallelism and virtualization. Parallelism seeks to increase throughput and strengthen fault-tolerance. Virtualization employs additional levels of data addressing indirection to improve system flexibility and lower administration costs. Do the optimizations of file systems make sense for current storage systems? In this paper, I show that the performance of a current advanced local file system is sensitive to the virtualization parameters of its storage system. Sometimes random block layout outperforms smart file system layout. In addition, random block layout stabilizes performance across several virtualization parameters. This approach has the advantage of immunizing file systems against changes in their underlying storage systems.
| A trace-driven analysis of the UNIX 4.2 BSD file system | Volume Managers in Linux A volume manager is a subsystem for online disk storage management which has become a de-facto standard across UNIX implementations and is a serious enabler for Linux in the enterprise computing area. It adds an additional layer between the physical peripherals and the I/O interface in the kernel to present a logical view of disks, unlike current partition schemes where disks are divided into fixed-size sections. In addition to providing a logical level of management, a volume manager will often implement one or more levels of software RAID to improve performance or reliability. Advanced logical management tools and software RAID are the specialties of the Logical Volume Manager (LVM) and Multiple Devices (MD) drivers respectively. These are the two most widely used Linux volume managers today. This paper describes the current technologies available in Linux and new work in the area of volume management. | Hot Mirroring: A Study to Hide Parity Upgrade Penalty and Degradations During Rebuilds for RAID5 | An evaluation of buffer management strategies for relational database systems In this paper we present a new algorithm, DBMIN, for managing the buffer pool of a relational database management system. DBMIN is based on a new model of relational query behavior, the query locality set model (QLSM). Like the hot set model, the QLSM has an advantage over the stochastic models due to its ability to predict future reference behavior. However, the QLSM avoids the potential problems of the hot set model by separating the modeling of reference behavior from any particular buffer management algorithm. After introducing the QLSM and describing the DBMIN algorithm, we present a performance evaluation methodology for evaluating buffer management algorithms in a multiuser environment. This methodology employed a hybrid model that combines features of both trace-driven and distribution-driven simulation models. Using this model, the performance of the DBMIN algorithm in a multiuser environment is compared with that of the hot set algorithm and four more traditional buffer replacement algorithms. | Conquest: Better Performance Through a Disk/Persistent-RAM Hybrid File System The rapidly declining cost of persistent RAM technologies prompts the question of when, not whether, such memory will become the preferred storage medium for many computers. Conquest is a file system that provides a transition from disk to persistent RAM as the primary storage medium. Conquest provides two specialized and simplified data paths to in-core and on-disk storage, and Conquest realizes most of the benefits of persistent RAM at a fractional cost of a RAM-only solution. As of October 2001, Conquest can be used effectively for a hardware cost of under $200. We compare Conquest's performance to ext2, reiserfs, SGI XFS, and ramfs, using popular benchmarks. Our measurements show that Conquest incurs little overhead compared to ramfs. Compared to the disk-based file systems, Conquest achieves 24% to 1900% faster memory performance, and 43% to 96% faster performance when exercising both memory and disk. | An Efficient Unification Algorithm | CP-nets: a tool for representing and reasoning with conditional ceteris paribus preference statements Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way.
In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence. | So Many WAM Variations, So Little Time The WAM allows within its framework many variations e.g. regarding the term representation, the instruction set and the memory organization. Consequently several Prolog systems have implemented successful variants of the WAM. While these variants are effective within their own context, it is difficult to assess the merit of their particular variation. In this work, four term representations that were used by at least one successful system are compared empirically within dProlog, one basic implementation which keeps all other things equal. We also report on different implementation choices in the dProlog emulator itself. dProlog is reasonably efficient, so it makes sense to use it for these experiments. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.035705 | 0.022222 | 0.01989 | 0.012307 | 0.005733 | 0.00188 | 0.000163 | 0.000031 | 0.000013 | 0.000003 | 0 | 0 | 0 | 0
Shade: Information-Based Regularization For Deep Learning Regularization is a big issue for training deep neural networks. In this paper, we propose a new information-theory-based regularization scheme named SHADE for SHAnnon DEcay. The originality of the approach is to define a prior based on conditional entropy, which explicitly decouples the learning of invariant representations in the regularizer and the learning of correlations between inputs and labels in the data fitting term. Our second contribution is to derive a stochastic version of the regularizer compatible with deep learning, resulting in a tractable training scheme. We empirically validate the efficiency of our approach to improve classification performances compared to standard regularization schemes on several standard architectures. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae.
A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. 
| Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. 
We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Long-distance mutual exclusion for propositional planning The use of mutual exclusion (mutex) has led to significant advances in propositional planning. However, previous mutex can only detect pairs of actions or facts that cannot be arranged at the same time step. In this paper, we introduce a new class of constraints that significantly generalizes mutex and can be efficiently computed. The proposed long-distance mutual exclusion (londex) can capture constraints over actions and facts not only at the same time step but also across multiple steps. Londex provides a powerful and general approach for improving planning efficiency. As an application, we have integrated londex into SATPLAN04, a leading optimal planner. Experimental results show that londex can effectively prune the search space and reduce the planning time. The resulting planner, MaxPlan, has won the First Place Award in the Optimal Track of the 5th International Planning Competition. | Recognizing frozen variables in constraint satisfaction problems In constraint satisfaction problems over finite domains, some variables can be frozen, that is, they take the same value in all possible solutions. We study the complexity of the problem of recognizing frozen variables with restricted sets of constraint relations allowed in the instances. We show that the complexity of such problems is determined by certain algebraic properties of these relations. Under the assumption that NP ≠ coNP (and consequently PTIME ≠ NP), we characterize all tractable problems, and describe large classes of NP-complete, coNP-complete, and DP-complete problems. As an application of these results, we completely classify the complexity of the problem in two cases: (1) with domain size 2; and (2) when all unary relations are present. We also give a rough classification for domain size 3. | Fusing procedural and declarative planning goals for nondeterministic domains While in most planning approaches goals and plans are different objects, it is often useful to specify goals that combine declarative conditions with procedural plans. In this paper, we propose a novel language for expressing temporally extended goals for planning in nondeterministic domains. The key feature of this language is that it allows for an arbitrary combination of declarative goals expressed in temporal logic and procedural goals expressed as plan fragments. We provide a formal definition of the language and its semantics, and we propose an approach to planning with this language in nondeterministic domains. We implement the planning framework and perform a set of experimental evaluations that show the potentialities of our approach. | An LP-based heuristic for optimal planning One of the most successful approaches in automated planning is to use heuristic state-space search. A popular heuristic that is used by a number of state-space planners is based on relaxing the planning task by ignoring the delete effects of the actions. In several planning domains, however, this relaxation produces rather weak estimates to guide search effectively. We present a relaxation using (integer) linear programming that respects delete effects but ignores action ordering, which in a number of problems provides better distance estimates. Moreover, our approach can be used as an admissible heuristic for optimal planning. | In defense of PDDL axioms There is controversy as to whether explicit support for PDDL-like axioms and derived predicates is needed for planners to handle real-world domains effectively. 
Many researchers have deplored the lack of precise semantics for such axioms, while others have argued that it might be best to compile them away. We propose an adequate semantics for PDDL axioms and show that they are an essential feature by proving that it is impossible to compile them away if we restrict the growth of plans and domain descriptions to be polynomial. These results suggest that adding a reasonable implementation to handle axioms inside the planner is beneficial for the performance. Our experiments confirm this suggestion. | Planning as satisfiability | Complexity, decidability and undecidability results for domain-independent planning In this paper, we examine how the complexity of domain-independent planning with STRIPS-style operators depends on the nature of the planning operators. We show conditions under which planning is decidable and undecidable. Our results on this topic solve an open problem posed by Chapman (5), and clear up some difficulties with his undecidability theorems. | Cost-Sharing Approximations for h Relaxations based on (either complete or partial) ignoring delete effects of the actions provide the basis for some seminal classical planning heuristics. However, the palette of the conceptual tools exploited by these heuristics remains rather limited. We study a framework for approximating the optimal cost solutions for problems with no delete effects that bridges between certain works on heuristic search for probabilistic reasoning and classical planning. In particular, this framework generalizes some previously known, as well as suggests some novel, tools for heuristic estimates for Strips planning. | A linear programming heuristic for optimal planning I introduce a new search heuristic for propositional STRIPS planning that is based on transforming planning instances to linear programming instances. The linear programming heuristic is admissible for finding minimum length plans and can be used by partial-order planning algorithms. This heuristic appears to be the first non-trivial admissible heuristic for partial-order planning. An empirical study compares Lplan, a partial-order planner incorporating the heuristic, to Graphplan, Satplan, and UCPOP on the tower of Hanoi domain, random blocks-world instances, and random planning instances. Graphplan is far faster in the study than the other algorithms. Lplan is often slower because the heuristic is time-consuming, but Lplan shows promise because it often performs a small search. | Mapping conformant planning into SAT through compilation and projection Conformant planning is a variation of classical AI planning where the initial state is partially known and actions can have non-deterministic effects. While a classical plan must achieve the goal from a given initial state using deterministic actions, a conformant plan must achieve the goal in the presence of uncertainty in the initial state and action effects. Conformant planning is computationally harder than classical planning, and unlike classical planning, cannot be reduced polynomially to SAT (unless P = NP). Current SAT approaches to conformant planning, such as those considered by Giunchiglia and colleagues, thus follow a generate-and-test strategy: the models of the theory are generated one by one using a SAT solver (assuming a given planning horizon), and from each such model, a candidate conformant plan is extracted and tested for validity using another SAT call.
This works well when the theory has few candidate plans and models, but otherwise is too inefficient. In this paper we propose a different use of a SAT engine where conformant plans are computed by means of a single SAT call over a transformed theory. This transformed theory is obtained by projecting the original theory over the action variables. This operation, while intractable, can be done efficiently provided that the original theory is compiled into d–DNNF (Darwiche 2001), a form akin to OBDDs (Bryant 1992). The experiments that are reported show that the resulting compile-project-sat planner is competitive with state-of-the-art optimal conformant planners and improves upon a planner recently reported at ICAPS-05. | Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev... | Self Supervised Boosting Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data. | Beyond striping: the bridge multiprocessor file system High-performance parallel computers require high-performance file systems. Exotic I/O hardware will be of little use if file system software runs on a single processor of a many-processor machine. We believe that cost-effective I/O for large multiprocessors can best be obtained by spreading both data and file system computation over a large number of processors and disks. To assess the effectiveness of this approach, we have implemented a prototype system called Bridge, and have studied its performance on several data intensive applications, among them external sorting. A detailed analysis of our sorting algorithm indicates that Bridge can profitably be used on configurations in excess of one hundred processors with disks. Empirical results on a 32-processor implementation agree with the analysis, providing us with a high degree of confidence in this prediction. Based on our experience, we argue that file systems such as Bridge will satisfy the I/O needs of a wide range of parallel architectures and applications. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition.
The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.068321 | 0.070392 | 0.033333 | 0.0127 | 0.007496 | 0.002358 | 0.000857 | 0.000249 | 0.000062 | 0.000005 | 0 | 0 | 0 | 0 |
Faster and Accurate Compressed Video Action Recognition Straight from the Frequency Domain Human action recognition has become one of the most active field of research in computer vision due to its wide range of applications, like surveillance, medical, industrial environments, smart homes, among others. Recently, deep learning has been successfully used to learn powerful and interpretable features for recognizing human actions in videos. Most of the existing deep learning approaches have been designed for processing video information as RGB image sequences. For this reason, a preliminary decoding process is required, since video data are often stored in a compressed format. However, a high computational load and memory usage is demanded for decoding a video. To overcome this problem, we propose a deep neural network capable of learning straight from compressed video. Our approach was evaluated on two public benchmarks, the UCF-101 and HMDB-51 datasets, demonstrating comparable recognition performance to the state-of-the-art methods, with the advantage of running up to 2 times faster in terms of inference speed. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning.
Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. 
The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. 
Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Incremental learning by message passing in hierarchical temporal memory Hierarchical temporal memory (HTM) is a biologically inspired framework that can be used to learn invariant representations of patterns in a wide range of applications. Classical HTM learning is mainly unsupervised, and once training is completed, the network structure is frozen, thus making further training (i.e., incremental learning) quite critical. In this letter, we develop a novel technique for HTM incremental supervised learning based on gradient descent error minimization. We prove that error backpropagation can be naturally and elegantly implemented through native HTM message passing based on belief propagation. Our experimental results demonstrate that a two-stage training approach composed of unsupervised pretraining and supervised refinement is very effective (both accurate and efficient). This is in line with recent findings on other deep architectures. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms.
Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. 
| Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. 
We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Low-Density Triple-Erasure Correcting Codes for Dependable Distributed Storage Systems This paper presents simulations of 3 different implementations of the minority-3 function, with special focus on mismatch analysis through statistical Monte Carlo-simulations. The simulations clearly favors the minority-3 Mirrored gate, and a gate-level ... | New Efficient MDS Array Codes for RAID Part I: Reed-Solomon-Like Codes for Tolerating Three Disk Failures This paper presents a class of binary Maximum Distance Separable (MDS) array codes for tolerating disk failures in Redundant Arrays of Inexpensive Disks (RAID) architecture based on circular permutation matrices. The size of the information part is m \times n, the size of the parity-check part is m \times 3, and the minimum distance is 4, where n is the number of information disks, the number of parity-check disks is 3, and (m+1) is a prime integer. In practical applications, m can be very large and n is from 20 to 50. The code rate is R = {\frac{n}{n+3}}. These codes can be used for tolerating three disk failures. The encoding and decoding of the Reed-Solomon-like codes are very fast. There need to be 3mn XOR operations for encoding and (3mn+9(m+1)) XOR operations for decoding. | New Efficient MDS Array Codes for RAID Part II: Rabin-Like Codes for Tolerating Multiple (greater than or equal to 4) Disk Failures A new class of Binary Maximum Distance Separable (MDS) array codes which are based on circular permutation matrices are introduced in this paper. These array codes are used for tolerating multiple (greater than or equal to 4) disk failures in Redundant Arrays of Inexpensive Disks (RAID) architecture. The size of the information part is m \times n, where n is the number of information disks and (m+1) is a prime integer; the size of the parity-check part is m \times r, the minimum distance is r+1, and the number of parity-check disks is r. In practical applications, m can be very large and n ranges from 20 to 50. The code rate is R = {\frac{n}{n+r}}. These codes can be used for tolerating up to r disk failures, with very fast encoding and decoding. The complexities of encoding and decoding algorithms are O(rmn) and O(m^3r^4), respectively. When r=4, there need to be 9mn XOR operations for encoding and (9n+95)(m+1) XOR operations for decoding. | A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
| Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days. | Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented. | Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. | Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability.
In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism. | Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[logkn]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all k. | Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs. | ARIMA time series modeling and forecasting for adaptive I/O prefetching Bursty application I/O patterns, together with transfer limited storage devices, combine to create a major I/O bottleneck on parallel systems. This paper explores the use of time series models to forecast application I/O request times, then prefetching I/O requests during computation intervals to hide I/O latency. Experimental results with I/O intensive scientific codes show performance improvements compared to standard UNIX prefetching strategies. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. 
| Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.015385 | 0.014286 | 0.000219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Configuring Storage Area Networks for Mandatory Security Storage-area networks are a popular and efficient way of building large storage systems both in an enterprise environment and for multi-domain storage service providers. In both environments the network and the storage has to be configured to ensure that the data is maintained securely and can be delivered efficiently. In this paper we describe a model of mandatory security for multi-domain storage services that is flexible enough to reflect the data requirements, tractable for the administrator, and implementable as part of an automatic configuration system. We describe the model abstractly, its implementation as part of a prototype SAN configuration system written in OPL, and illustrate its operation on a set of sample configurations. | A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Logic programs with classical negation | The well-founded semantics for general logic programs A general logic program (abbreviated to “program” hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the “meaning of the program,” or its “declarative semantics.” Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a “satisfactory” total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model.
It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of “stratified” and “locally stratified” programs. The method in this paper is also compared with other proposals in the literature, including Clark’s “program completion,” Fitting’s and Kunen’s 3-valued interpretations of it, and the “stable models” of Gelfond and Lifschitz. | Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems. | Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system.
The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. 
| An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.2 | 0.000219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Semantic email: theory and applications This paper investigates how the vision of the Semantic Web can be carried over to the realm of email. We introduce a general notion of semantic email, in which an email message consists of a structured query or update coupled with corresponding explanatory text. Semantic email opens the door to a wide range of automated, email-mediated applications with formally guaranteed properties. In particular, this paper introduces a broad class of semantic email processes. For example, consider the process of sending an email to a program committee, asking who will attend the PC dinner, automatically collecting the responses, and tallying them up. We define both logical and decision-theoretic models where an email process is modeled as a set of updates to a data set on which we specify goals via certain constraints or utilities. We then describe a set of inference problems that arise while trying to satisfy these goals and analyze their computational tractability. In particular, we show that for the logical model it is possible to automatically infer which email responses are acceptable w.r.t. a set of constraints in polynomial time, and for the decision-theoretic model it is possible to compute the optimal message-handling policy in polynomial time. In addition, we show how to automatically generate explanations for a process's actions, and identify cases where such explanations can be generated in polynomial time. Finally, we discuss our publicly available implementation of semantic email and outline research challenges in this realm. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor.
Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. 
In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
A Denotational Semantics for First-Order Logic In Apt and Bezem [AB99] we provided a computational interpretation of first-order formulas over arbitrary interpretations. Here we complement this work by introducing a denotational semantics for first-order logic. Additionally, by allowing an assignment of a nonground term to a variable we introduce in this framework logical variables. The semantics combines a number of well-known ideas from the areas of semantics of imperative programming languages and logic programming. In the resulting computational view conjunction corresponds to sequential composition, disjunction to "don't know" nondeterminism, existential quantification to declaration of a local variable, and negation to the "negation as finite failure" rule. The soundness result shows correctness of the semantics with respect to the notion of truth. The proof resembles in some aspects the proof of the soundness of the SLDNF-resolution. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated.
Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up. | Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. 
Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. | An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. 
Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
On the complexity of unique solutions We show that the problem of deciding whether an instance of the traveling salesman problem has a uniquely optimal solution is complete for Δ^p_2. | The Unique Horn-Satisfiability Problem and Quadratic Boolean Equations The unique satisfiability problem for general Boolean expressions has attracted interest in recent years in connection with basic complexity issues [12,13]. We investigate here Unique Horn-Satisfiability, i.e. the subclass of Unique-Sat restricted to Horn expressions. We introduce two operators, reduction and shrinking, each transforming a given Horn expression into another Horn expression involving strictly fewer variables and preserving the unique satisfiability property, if present. | Abduction in Well-Founded Semantics and Generalized Stable Models Abductive logic programming offers a formalism to declaratively express and solve problems in areas such as diagnosis, planning, belief revision and hypothetical reasoning. Tabled logic programming offers a computational mechanism that provides a level of declarativity superior to that of Prolog, and which has supported successful applications in fields such as parsing, program analysis, and model checking. In this paper we show how to use tabled logic programming to evaluate queries to abductive frameworks with integrity constraints when these frameworks contain both default and explicit negation. The result is the ability to compute abduction over well-founded semantics with explicit negation and answer sets. Our approach consists of a transformation and an evaluation method. The transformation adjoins to each objective literal $O$ in a program, an objective literal $not(O)$ along with rules that ensure that $not(O)$ will be true if and only if $O$ is false. We call the resulting program a {\em dual} program. The evaluation method, \wfsmeth, then operates on the dual program. \wfsmeth{} is sound and complete for evaluating queries to abductive frameworks whose entailment method is based on either the well-founded semantics with explicit negation, or on answer sets. Further, \wfsmeth{} is asymptotically as efficient as any known method for either class of problems. In addition, when abduction is not desired, \wfsmeth{} operating on a dual program provides a novel tabling method for evaluating queries to ground extended programs whose complexity and termination properties are similar to those of the best tabling methods for the well-founded semantics. A publicly available meta-interpreter has been developed for \wfsmeth{} using the
XSB system. | From Disjunctive Programs to Abduction. The purpose of this work is to clarify the relationship between three approaches to representing incomplete information in logic programming. Classical negation and epistemic disjunction are used in the first of these approaches, abductive logic programs with classical negation in the second, and a simpler form of abductive logic programming --- without classical negation --- in the third. In the literature, these ideas have been illustrated with examples related to properties of actions, and ... | Bounded query computations A survey is given of directions, results, and methods in the study of complexity-bounded computations with a restricted number of queries to an oracle. In particular, polynomial-time-bounded computations with an NP oracle are considered. The main topics are: the relationship between the number of adaptive and parallel queries, connections to the closure of NP under polynomial-time truth-table reducibility, the Boolean hierarchy, the power of one more query, sparse oracles versus few queries, and natural complete problems for the most important bounded query classes | Action Languages Action languages are formal models of parts of the natural language that are used for talking about the effects of actions. This article is a collection of definitions related to action languages that may be useful as a reference in future publications. 1 Introduction This article is a collection of definitions related to action languages. It does not provide a comprehensive discussion of the subject, and does not contain a complete bibliography, but it may be useful as a reference in... | Relating equivalence and reducibility to sparse sets For various polynomial-time reducibilities r, the authors ask whether being r-reducible to a sparse set is a broader notion than being r-equivalent to a sparse set. Although distinguishing equivalence and reducibility to sparse sets, for many-one or 1-truth-table reductions, would imply that P≠NP, the authors show that for k-truth-table reductions, k⩾2, equivalence and reducibility to sparse sets provably differ. Though R. Gavalda and D. Watanabe have shown that, for any polynomial-time computable unbounded function f(·), some sets f(n)-truth-table reducible to sparse sets are not even Turing equivalent to sparse sets, the authors show that extending their result to the 2-truth-table case would provide a proof that P≠NP. Additionally, the authors study the relative power of different notions of reducibility and show that disjunctive and conjunctive truth-table reductions to sparse sets are surprisingly powerful, refuting a conjecture of K. Ko (1989) | The Logarithmic Alternation Hierarchy Collapses: $A\Sigma^{\cal L}_2 = A\Pi^{\cal L}_2$ | Saving queries with randomness In this paper, we investigate the power of randomness to save a query to an NP-complete set. We show that the P^{SAT∥[k]} ≤^p_m-complete language randomly reduces to a language in P^{SAT∥[k−1]} with a one-sided error probability of 1/⌈k/2⌉ or a two-sided error probability of 1/(k+1). Furthermore, we prove that these probability bounds are tight; i.e., they cannot be improved by 1/poly, unless PH collapses. We also obtain tight performance bounds for randomized reductions between nearby classes in the Boolean and bounded query hierarchies. These bounds provide probability thresholds for completeness under randomized reductions in these classes.
Using these thresholds, we show that certain languages in the Boolean hierarchy which are not ≤^p_m-complete in some relativized worlds, nevertheless inherit many of the hardness properties associated with the ≤^p_m-complete languages. Finally, we explore the relationship between randomization and functions that are computable using bounded queries to SAT. For any function h(n) = O(log n), we show that there is a function f computable using h(n) nonadaptive queries to SAT, which cannot be computed correctly with probability 1/2 + 1/poly by any randomized machine which makes less than h(n) adaptive queries to any oracle, unless PH collapses. | Modal Tableaux for Reasoning About Actions and Plans In this paper we investigate tableau proof procedures for reasoning about actions and plans. Our framework is a multimodal language close to that of propositional dynamic logic, wherein we solve the frame problem by introducing the notion of dependence as a weak causal connection between actions and atoms. The tableau procedure is sound and complete for an important fragment of our language, within which all standard problems of reasoning about actions can be expressed, in particular planning... | Efficient Temporal Reasoning In The Cached Event Calculus This article deals with the problem of providing Kowalski and Sergot's event calculus, extended with context dependency, with an efficient implementation in a logic programming framework. Despite a widespread recognition that a positive solution to efficiency issues is necessary to guarantee the computational feasibility of existing approaches to temporal reasoning, the problem of analyzing the complexity of temporal reasoning programs has been largely overlooked. This article provides a mathematical analysis of the efficiency of query and update processing in the event calculus and defines a cached version of the calculus that (i) moves computational complexity from query to update processing and (ii) features an absolute improvement of performance, because query processing in the event calculus costs much more than update processing in the proposed cached version. | AFRAID: a frequently redundant array of independent disks Disk arrays are commonly designed to ensure that stored data will always be able to withstand a disk failure, but meeting this goal comes at a significant cost in performance. We show that this is unnecessary. By trading away a fraction of the enormous reliability provided by disk arrays, it is possible to achieve performance that is almost as good as a non-parity-protected set of disks. In particular, our AFRAID design eliminates the small-update penalty that plagues traditional RAID 5 disk arrays. It does this by applying the data update immediately, but delaying the parity update to the next quiet period between bursts of client activity. That is, AFRAID makes sure that the array is frequently redundant, even if it isn't always so. By regulating the parity update policy, AFRAID allows a smooth trade-off between performance and availability. Under real-life workloads, the AFRAID design can provide close to the full performance of an array of unprotected disks, and data availability comparable to a traditional RAID 5. Our results show that AFRAID offers 42% better performance for only 10% less availability, 97% better for 23% less, and as much as a factor of 4.1 times better performance for giving up less than half RAID 5's availability.
We explore here the detailed availability and performance implications of the AFRAID approach. | P-Selectivity, immunity, and the power of one bit We prove that P-sel, the class of all P-selective sets, is EXP-immune, but is not EXP/1-immune. That is, we prove that some infinite P-selective set has no infinite EXP-time subset, but we also prove that every infinite P-selective set has some infinite subset in EXP/1. Informally put, the immunity of P-sel is so fragile that it is pierced by a single bit of information. The above claims follow from broader results that we obtain about the immunity of the P-selective sets. In particular, we prove that for every recursive function f, P-sel is DTIME(f)-immune. Yet we also prove that P-sel is not ${\it \Pi}^{p}_{2}$/1-immune. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.033648 | 0.06656 | 0.032 | 0.0064 | 0.002295 | 0.001 | 0.000222 | 0.000094 | 0.000013 | 0 | 0 | 0 | 0 | 0 |
The anomalous extension problem in default reasoning In their recent celebrated paper, Hanks and McDermott presented a simple problem in temporal reasoning which showed that a seemingly natural representation of a frame axiom in nonmonotonic logic can give rise to an anomalous extension, i.e., one which is counter-intuitive in that it does not appear to be supported by the known facts. | Default Theory for Well Founded Semantics with Explicit Negation One aim of this paper is to define a default theory for Well Founded Semantics of logic programs which have been extended with explicit negation, such that the models of a program correspond exactly to the extensions of the default theory corresponding to the program. | Logic Programming and Reasoning with Incomplete Information The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by new modal operators K and M (where for the set of rules T and formula F, KF stands for "F is known to be true by a reasoner with a set of premises T" and MF ... | A monotonicity theorem for extended logic programs Because general and extended logic programs behave nonmonotonically, it is in general difficult to predict how even minor changes to such programs will affect their meanings. This paper shows that for a restricted class of extended logic programs --- those with signings --- it is possible to state a fairly general theorem comparing the entailments of programs. To this end, we generalize (to the class of extended logic programs) the definition of a signing, first formulated by Kunen for general ... | Efficient Temporal Reasoning In The Cached Event Calculus This article deals with the problem of providing Kowalski and Sergot's event calculus, extended with context dependency, with an efficient implementation in a logic programming framework. Despite a widespread recognition that a positive solution to efficiency issues is necessary to guarantee the computational feasibility of existing approaches to temporal reasoning, the problem of analyzing the complexity of temporal reasoning programs has been largely overlooked. This article provides a mathematical analysis of the efficiency of query and update processing in the event calculus and defines a cached version of the calculus that (i) moves computational complexity from query to update processing and (ii) features an absolute improvement of performance, because query processing in the event calculus costs much more than update processing in the proposed cached version. | From Disjunctive Programs to Abduction. The purpose of this work is to clarify the relationship between three approaches to representing incomplete information in logic programming. Classical negation and epistemic disjunction are used in the first of these approaches, abductive logic programs with classical negation in the second, and a simpler form of abductive logic programming --- without classical negation --- in the third. In the literature, these ideas have been illustrated with examples related to properties of actions, and ... | A goal-oriented approach to computing the well-founded semantics Global SLS resolution is an ideal procedural semantics for the well-founded semantics.
We present a more effective variant of global SLS resolution, called XOLDTNF resolution, which incorporates simple mechanisms for loop detection and handling. Termination is guaranteed for all programs with the bounded-term-size property. We establish the soundness and (search space) completeness of XOLDTNF resolution. An implementation of XOLDTNF resolution in Prolog is available via FTP. | Logic programs with stable model semantics as a constraint programming paradigm Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it bring advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable‐free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable‐free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built‐in predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built‐in integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported. | Multi-agent Cooperation: A Description Logic View In this paper we propose dynamic description logic for formalizing multi-agent cooperation process with a clearly defined syntax and semantics. By combining the features of knowledge representation and reasoning of description logic and action theory for multi-agent interaction, our logic is effective and significant both for static and dynamic environment. On the static side, we employ description logic for the representation and reasoning of beliefs and goals. On the dynamic side, we adopt the object-oriented method to describe actions. The description of each action is composed of models, preconditions and effects. It can reflect the real changes of the world and is very suitable for belief revision and action planning. Based on our logic, we investigate how to form joint goal for multi-agent cooperation. In particular, we propose an effective dynamic planning algorithm for scheduling sub goals, which is greatly crucial for coordinating multi-agent behaviors. | On Computing Solutions to Belief Change Scenarios Belief change scenarios were recently introduced as a framework for expressing different forms of belief change. In this paper, we show how belief revision and belief contraction (within belief change scenarios) can be axiomatised by means of quantified Boolean formulas. This approach has several benefits. 
First, it furnishes an axiomatic specification of belief change within belief change scenarios. Second, this axiomatisation allows us to identify upper bounds for the complexity of revision and contraction within belief change scenarios.We strengthen these upper bounds by providing strict complexity results for the considered reasoning tasks. Finally, we obtain an implementation of different forms. of belief change by appeal to the existing system QUIP. | QuBE++: An Efficient QBF Solver In this paper we describe QuBE++, an efficient solver for Quantified Boolean Formulas (QBFs). To the extent of our knowledge, QUBE++ is the first QBF reasoning engine that uses lazy data structures both for unit clauses propagation and for pure literals detection. QuBE++ also features non-chronological backtracking and a branching heuristic that leverages the information gathered during the backtracking phase. Owing to such techniques and to a careful implementation, QuBE++ turns out to be an efficient and robust solver, whose performances exceed those of other state-of-the-art QBF engines, and are comparable with the best engines currently available on SAT instances. | B-tree indexes for high update rates In some applications, data capture dominates query processing. For example, monitoring moving objects often requires more insertions and updates than queries. Data gathering using automated sensors often exhibits this imbalance. More generally, indexing streams is considered an unsolved problem.For those applications, B-tree indexes are good choices if some trade-off decisions are tilted towards optimization of updates rather than towards optimization of queries. This paper surveys some techniques that let B-trees sustain very high update rates, up to multiple orders of magnitude higher than traditional B-trees, at the expense of query processing performance. Not surprisingly, some of these techniques are reminiscent of those employed during index creation, index rebuild, etc., while other techniques are derived from well known technologies such as differential files and log-structured file systems. | When Are Behaviour Networks Well-Behaved? Agents operating in the real world have to deal with a constantly changing and only partially predictable environment and are nevertheless expected to choose reasonable actions quickly. This problem is addressed by a number of action-selection mechanisms. Behaviour networks as proposed by Maes are one such mechanism, which is quite popular. In general, it seems not possible to predict when behaviour networks are well-behaved. However, they perform quite well in the robotic soccer context. In this paper, we analyse the reason for this success by identifying conditions that make behaviour networks goal converging, i.e., force them to reach the goals regardless of the details of the action selection scheme. In terms of STRIPS domains one could talk of self-solving planning domains. | Unsupervised (Parameter) Learning For Mrfs On Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach - a modified EM algorithm which is tractable because of the bipartiteness of the model graph. 
We compare the resulting double-loop algorithm and PCD learning experimentally and show that the former converges faster and more stably than the latter. | 1.020083 | 0.008614 | 0.008491 | 0.006987 | 0.006709 | 0.003626 | 0.002495 | 0.001353 | 0.000128 | 0.000028 | 0 | 0 | 0 | 0
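Several entries in this row (stable model semantics as a constraint programming paradigm, extensions of well-founded semantics) rest on the stable model semantics. For concreteness, here is a small, purely illustrative Gelfond-Lifschitz check for ground normal programs; the (head, positive body, negative body) rule encoding is an assumption of this sketch, not any cited system's input format.

```python
def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects the
    candidate set, then delete the remaining negative literals."""
    return [(head, pos) for head, pos, neg in program if not (set(neg) & candidate)]

def least_model(positive_program):
    """Least Herbrand model of a negation-free ground program via naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """A set of atoms is a stable model iff it equals the least model of its reduct."""
    return least_model(reduct(program, candidate)) == set(candidate)

# Rules are (head, positive_body, negative_body); the classic pair of rules
#   p :- not q.      q :- not p.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_stable(prog, {"p"}), is_stable(prog, {"q"}), is_stable(prog, {"p", "q"}))  # True True False
```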
On the analysis of randomized search heuristics and the design of specialized algorithms in combinatorial optimization. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Extended stable semantics for normal and disjunctive programs | The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified. | Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter. | Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
| Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism. | Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition. | Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio. | Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. 
Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. | An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle. | Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers | Parameterized complexity for the database theorist | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
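The QBF-evaluation entry in this row extends the Davis-Putnam procedure to quantified Boolean formulae. The brute-force semantic evaluator below is only meant to make the problem being solved concrete; it implements none of the paper's pruning techniques, and the prenex/DIMACS-style encoding is an assumption of this sketch.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a prenex-CNF QBF by expanding quantifiers recursively.
    prefix: list of ('A' | 'E', var); clauses: lists of signed ints where
    +v / -v denote the literal v / not v. Assumes every variable occurring
    in the clauses is quantified in the prefix."""
    assignment = assignment or {}
    if not prefix:
        return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, var: val}) for val in (False, True))
    return all(branches) if q == 'A' else any(branches)

# forall x exists y. (x or not y) and (not x or y)  -- y can always copy x, so this is true
prefix = [('A', 1), ('E', 2)]
clauses = [[1, -2], [-1, 2]]
print(eval_qbf(prefix, clauses))  # True
```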
Two components of an action language Some of the recent work on representing action makes use of high-level action languages. In this paper we show that an action language can be represented as the sum of two distinct parts: an "action description language" and an "action query language." A set of propositions in an action description language describes the effects of actions on states. Mathematically, it defines a transition system of the kind familiar from the theory of finite automata. An action query language serves for expressing properties of paths in a given transition system. We define the general concepts of a transition system, of an action description language and of an action query language, give a series of examples of languages of both kinds, and show how to combine a description language and a query language into one. This construction makes it possible to design the two components of an action language independently, which leads to the simplification and clarification of the theory of actions. | Hypothesizing about signaling networks The current knowledge about signaling networks is largely incomplete. Thus biologists constantly need to revise or extend existing knowledge. The revision and/or extension is first formulated as theoretical hypotheses, then verified experimentally. Many computer-aided systems have been developed to assist biologists in undertaking this challenge. The majority of the systems help in finding "patterns" in data and leave the reasoning to biologists. A few systems have tried to automate the reasoning process of hypothesis formation. These systems generate hypotheses from a knowledge base and given observations. A main drawback of these knowledge-based systems is the knowledge representation formalism they use. These formalisms are mostly monotonic and are now known to be not quite suitable for knowledge representation, especially in dealing with the inherently incomplete knowledge about signaling networks. We propose an action language based framework for hypothesis formation for signaling networks. We show that the hypothesis formation problem can be translated into an abduction problem. This translation facilitates the complexity analysis and an efficient implementation of our system. We illustrate the applicability of our system with an example of hypothesis formation in the signaling network of the p53 protein. | Cognitive Technical Systems -- What Is the Role of Artificial Intelligence? The newly established cluster of excellence CoTeSys investigates the realization of cognitive capabilities such as perception, learning, reasoning, planning, and execution for technical systems including humanoid robots, flexible manufacturing systems, and autonomous vehicles. In this paper we describe cognitive technical systems using a sensor-equipped kitchen with a robotic assistant as an example. We will particularly consider the role of Artificial Intelligence in the research enterprise. Key research foci of Artificial Intelligence research in CoTeSys include (i) symbolic representations grounded in perception and action, (ii) first-order probabilistic representations of actions, objects, and situations, (iii) reasoning about objects and situations in the context of everyday manipulation tasks, and (iv) the representation and revision of robot plans for everyday activity. | Wire Routing and Satisfiability Planning Wire routing is the problem of determining the physical locations of all the wires interconnecting the circuit components on a chip.
Since the wires cannot intersect with each other, they are competing for limited spaces, thus making routing a difficult combinatorial optimization problem. We present a new approach to wire routing that uses action languages and satisfiability planning. Its idea is to think of each path as the trajectory of a robot, and to understand a routing problem as the problem of planning the actions of several robots whose paths are required to be disjoint. The new method differs from the algorithms implemented in the existing routing systems in that it always correctly determines whether a given problem is solvable, and it produces a solution whenever one exists. | Modeling Biological Networks by Action Languages via Answer Set Programming We describe an approach to modeling biological networks by action languages via answer set programming. To this end, we propose an action language for modeling biological networks, building on previous work by Baral et al. We introduce its syntax and semantics along with a translation into answer set programming, an efficient Boolean Constraint Programming Paradigm. Finally, we describe one of its applications, namely, the sulfur starvation response-pathway of the model plant Arabidopsis thaliana and sketch the functionality of our system and its usage. | Complexity aspects of various semantics for disjunctive databases This paper addresses complexity issues for important problems arising with disjunctive databases. In particular, the complexity of inference of a literal and a formula from a propositional disjunctive database under a variety of well-known disjunctive database semantics is investigated, as well as deciding whether a disjunctive database has a model under a particular semantics. The problems are located in appropriate slots of the polynomial hierarchy. | From logic programming towards multi-agent systems In this paper we present an extension of logic programming (LP) that is suitable not only for the "rational" component of a single agent but also for the "reactive" component and that can encompass multi-agent systems. We modify an earlier abductive proof procedure and embed it within an agent cycle. The proof procedure incorporates abduction, definitions and integrity constraints within a dynamic environment, where changes can be observed as inputs. The definitions allow rational planning behaviour and the integrity constraints allow reactive, condition-action type behaviour. The agent cycle provides a resource-bounded mechanism that allows the agent's thinking to be interrupted for the agent to record and assimilate observations as input and execute actions as output, before resuming further thinking. We argue that these extensions of LP, accommodating multi-theories embedded in a shared environment, provide the necessary multi-agent functionality. We argue also that our work extends Shoham's Agent0 and the BDI architecture. | Nonmonotonic reasoning in the framework of situation calculus Most of the solutions proposed to the Yale shooting problem have either introduced new nonmonotonic reasoning methods (generally involving temporal priorities) or completely reformulated the domain axioms to represent causality explicitly. This paper presents a new solution based on the idea that since the abnormality predicate takes a situational argument, it is important for the meanings of the situations to be held constant across the various models being compared. This is accomplished by a...
| Reasoning about Complex Actions with Incomplete Knowledge: A Modal Approach In this paper we propose a modal approach for reasoning about dynamic domains in a logic programming setting. We present a logical framework for reasoning about actions in which modal inclusion axioms of the form $\langle p_0\rangle\varphi \subset \langle p_1\rangle\langle p_2\rangle\cdots\langle p_n\rangle\varphi$ allow procedures to be defined for building complex actions from elementary actions. The language is able to handle knowledge producing actions as well as actions which remove information. Incomplete states are represented by means of epistemic operators and test actions can be used to check whether a fluent is true, false or undefined in a state. We give a non-monotonic solution for the frame problem by making use of persistency assumptions in the context of an abductive characterization. A goal directed proof procedure is defined, which allows reasoning about complex actions and generating conditional plans. | Fixed-Parameter Tractability and Completeness I: Basic Results For many fixed-parameter problems that are trivially solvable in polynomial-time, such as ($k$-)DOMINATING SET, essentially no better algorithm is presently known than the one which tries all possible solutions. Other problems, such as ($k$-)FEEDBACK VERTEX SET, exhibit fixed-parameter tractability: for each fixed $k$ the problem is solvable in time bounded by a polynomial of degree $c$, where $c$ is a constant independent of $k$. We establish the main results of a completeness program which addresses the apparent fixed-parameter intractability of many parameterized problems. In particular, we define a hierarchy of classes of parameterized problems $FPT \subseteq W[1] \subseteq W[2] \subseteq \cdots \subseteq W[SAT] \subseteq W[P]$ and identify natural complete problems for $W[t]$ for $t \geq 2$. (In other papers we have shown many problems complete for $W[1]$.) DOMINATING SET is shown to be complete for $W[2]$, and thus is not fixed-parameter tractable unless INDEPENDENT SET, CLIQUE, IRREDUNDANT SET and many other natural problems in $W[2]$ are also fixed-parameter tractable. We also give a compendium of currently known hardness results as an appendix. | Modal Nonmonotonic Logics Revisited: Efficient Encodings for the Basic Reasoning Tasks Modal nonmonotonic logics constitute a well-known family of knowledge-representation formalisms capturing ideally rational agents reasoning about their own beliefs. Although these formalisms are extensively studied from a theoretical point of view, most of these approaches lack generally available solvers thus far. In this paper, we show how variants of Moore's autoepistemic logic can be axiomatised by means of quantified Boolean formulas (QBFs). More specifically, we provide polynomial reductions of the basic reasoning tasks associated with these logics into the evaluation problem of QBFs. Since there are now efficient QBF-solvers, this reduction technique yields a practicably relevant approach to build prototype reasoning systems for these formalisms. We incorporated our encodings within the system QUIP and tested their performance on a class of benchmark problems using different underlying QBF-solvers. | The Swiss-Prot Protein Knowledgebase and Its Supplement TrEMBL in 2003 The SWISS-PROT protein knowledgebase (http://www.expasy.org/sprot/ and http://www.ebi.ac.uk/swissprot/) connects amino acid sequences with the current knowledge in the Life Sciences.
Each protein entry provides an interdisciplinary overview of relevant information by bringing together experimental results, computed features and sometimes even contradictory conclusions. Detailed expertise that goes beyond the scope of SWISS-PROT is made available via direct links to specialised databases. SWISS-PROT provides annotated entries for all species, but concentrates on the annotation of entries from human ( the HPI project) and other model organisms to ensure the presence of high quality annotation for representative members of all protein families. Part of the annotation can be transferred to other family members, as is already done for microbes by the High-quality Automated and Manual Annotation of microbial Proteomes (HAMAP) project. Protein families and groups of proteins are regularly reviewed to keep up with current scientific findings. Complementarily, TrEMBL strives to comprise all protein sequences that are not yet represented in SWISS-PROT, by incorporating a perpetually increasing level of mostly automated annotation. Researchers are welcome to contribute their knowledge to the scientific community by submitting relevant findings to SWISS-PROT at [email protected]. | Reasoning about Duplicate Elimination with Description Logic Queries commonly perform much better if they manage to avoid duplicate elimination operations in their execution plans. In this paper, we report on a technique that provides a necessary and sufficient condition for removing such operators from object relational conjunctive queries under the standard duplicate semantics. The condition is fully captured as a membership problem in a dialect of description logic called CFD, which is capable of expressing a number of common constraints implicit in object relational database schemas. We also present a PTIME algorithm for arbitrary membership problems in CFD. | Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification. | 1.019712 | 0.033333 | 0.022222 | 0.021166 | 0.009551 | 0.005962 | 0.002101 | 0.000735 | 0.000046 | 0.000014 | 0 | 0 | 0 | 0 |
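The query abstract of this row views an action description language as defining a transition system and an action query language as asking about paths in it. A toy Python illustration with an invented one-fluent "toggle" domain (all fluent and action names here are made up for the sketch, not taken from the paper):

```python
# States are frozensets of the fluents that hold; an action description induces
# a transition relation over them (given directly here for a toy switch domain).
def successors(state, action):
    if action == "toggle":
        return [state ^ {"on"}]   # symmetric difference flips the single fluent
    return [state]                # "wait" changes nothing

def holds_after(initial, plan, fluent):
    """Query: does `fluent` hold in every state reachable by executing `plan`?"""
    frontier = {initial}
    for action in plan:
        frontier = {s2 for s in frontier for s2 in successors(s, action)}
    return all(fluent in s for s in frontier)

init = frozenset()  # lamp initially off
print(holds_after(init, ["toggle", "wait", "toggle"], "on"))  # False: back to off
print(holds_after(init, ["toggle"], "on"))                    # True
```

The split mirrors the abstract's point: `successors` plays the role of the action description (the transition system), while `holds_after` is a tiny query language evaluated over paths of that system.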
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided. | Tractable Structures for Constraint Satisfaction with Truth Tables The way the graph structure of the constraints influences the complexity of constraint satisfaction problems (CSP) is well
understood for bounded-arity constraints. The situation is less clear if there is no bound on the arities. In this case the
answer depends also on how the constraints are represented in the input. We study this question for the truth table representation
of constraints. We introduce a new hypergraph measure adaptive width and show that CSP with truth tables is polynomial-time solvable if restricted to a class of hypergraphs with bounded adaptive
width. Conversely, assuming a conjecture on the complexity of binary CSP, there is no other polynomial-time solvable case.
Finally, we present a class of hypergraphs with bounded adaptive width and unbounded fractional hypertree width. | A perspective on assumption-based truth maintenance | Extremal problems in logic programming and stable model computation We study the following problem: given a class of logic programs ¢, determine the maximum number of stable models of a program from ©. We establish the maximum for the class of all logic programs with at most n clauses, and for the class of all logic programs of size at most n. We also characterize the programs for which the maxima are attained. We obtained similar results for the class of all disjunctive logic programs with at most n clauses, each of length at most m, and for the class of all disjunctive logic programs of size at most n. Our results on logic programs have direct implication for the design of algorithms to compute stable models. Several such algorithms, similar in spirit to the Davis-Putnam procedure, are described in the paper. Our results imply that there is an algorithm that finds all stable models of a program with n clauses after considering the search space of size O(3n/3) in the worst case. Our results also provide some insights into the question of representability of families of sets as families of stable models of logic programs. | Tractable Hypergraph Properties for Constraint Satisfaction and Conjunctive Queries An important question in the study of constraint satisfaction problems (CSP) is understanding how the graph or hypergraph describing the incidence structure of the constraints influences the complexity of the problem. For binary CSP instances (that is, where each constraint involves only two variables), the situation is well understood: the complexity of the problem essentially depends on the treewidth of the graph of the constraints [Grohe 2007; Marx 2010b]. However, this is not the correct answer if constraints with unbounded number of variables are allowed, and in particular, for CSP instances arising from query evaluation problems in database theory. Formally, if H is a class of hypergraphs, then let CSP(H) be CSP restricted to instances whose hypergraph is in H. Our goal is to characterize those classes of hypergraphs for which CSP(H) is polynomial-time solvable or fixed-parameter tractable, parameterized by the number of variables. Note that in the applications related to database query evaluation, we usually assume that the number of variables is much smaller than the size of the instance, thus parameterization by the number of variables is a meaningful question. The most general known property of H that makes CSP(H) polynomial-time solvable is bounded fractional hypertree width. Here we introduce a new hypergraph measure called submodular width, and show that bounded submodular width of H (which is a strictly more general property than bounded fractional hypertree width) implies that CSP(H) is fixed-parameter tractable. In a matching hardness result, we show that if H has unbounded submodular width, then CSP(H) is not fixed-parameter tractable (and hence not polynomial-time solvable), unless the Exponential Time Hypothesis (ETH) fails. The algorithmic result uses tree decompositions in a novel way: instead of using a single decomposition depending on the hypergraph, the instance is split into a set of instances (all on the same set of variables as the original instance), and then the new instances are solved by choosing a different tree decomposition for each of them. 
The reason why this strategy works is that the splitting can be done in such a way that the new instances are “uniform” with respect to the number extensions of partial solutions, and therefore the number of partial solutions can be described by a submodular function. For the hardness result, we prove via a series of combinatorial results that if a hypergraph H has large submodular width, then a 3SAT instance can be efficiently simulated by a CSP instance whose hypergraph is H. To prove these combinatorial results, we need to develop a theory of (multicommodity) flows on hypergraphs and vertex separators in the case when the function b(S) defining the cost of separator S is submodular, which can be of independent interest. | On the complexity of database queries We revisit the issue of the complexity of database queries, in the light of the recent parametric refinement of com- plexity theory. We show that, if the query size (or the number of variables in the query) is considered as a parameter, then the relational calculus and its frag- ments (conjunctive queries, positive queries) are classi- fied at appropriate levels of the so-called W hierarchy of Downey and Fellows. These results strongly suggest that the query size is inherently in the exponent of the data complexity of any query evaluation algorithm, with the implication becoming stronger as the expressibility of the query language increases. For recursive languages (fixpoint logic, Datalog) this is provably the case (14). On the positive side, we show that this exponential de- pendence can be avoided for the extension of acyclic queries with # (but not <) inequalities. | A linear time algorithm for finding tree-decompositions of small treewidth In this paper, we give for constant k a linear-time algorithm that, given a graph G = (V, E), determines whether the treewidth of G is at most k and, if so, finds a tree-decomposition of G with treewidth at most k. A consequence is that every minor-closed class of graphs that does not contain all planar graphs has a linear-time recognition algorithm. Another consequence is that a similar result holds when we look instead for path-decompositions with pathwidth at mast some constant k. | QUBOS: Deciding Quantified Boolean Logic Using Propositional Satisfiability Solvers We describe Qubos (QUantified BOolean Solver), a decision procedure for quantified Boolean logic. The procedure is based on nonclausal simplification techniques that reduce formulae to a propositional clausal form after which off-the-shelf satisfiability solvers can be employed. W e show that there are domains exhibiting structure for which this procedure is very effective and we report on experimental results. | Parallel non-binary planning in polynomial time This paper formally presents a class of planning problems which allows non-binary state variables and parallel execution of actions. The class is proven to be tractable, and we provide a sound and complete polynomial time algorithm for planning within this class. This result means that we are getting closed to tackling realistic planning problems in sequential control, where a restricted problem representation is often sufficient, but where the size of the problems make tractability an important issue. | On the Complexity of Plan Adaptation by Derivational Analogy in a Universal Classical Planning Framework In this paper we present an algorithm called DerUCP, which can be regarded as a general model for plan adaptation using Derivational Analogy. 
Using DerUCP, we show that previous results on the complexity of plan adaptation do not apply to Derivational Analogy. We also show that Derivational Analogy can potentially produce exponential reductions in the size of the search space generated by a planning system. | Beyond NP: Arc-Consistency for Quantified Constraints The generalization of the satisfiability problem with arbitrary quantifiers is a challenging problem of both theoretical and practical relevance. Being PSPACE-complete, it provides a canonical model for solving other PSPACE tasks which naturally arise in AI.Effective SAT-based solvers have been designed very recently for the special case of boolean constraints. We propose to consider the more general problem where constraints are arbitrary relations over finite domains. Adopting the viewpoint of constraint-propagation techniques so successful for CSPs, we provide a theoretical study of this problem. Our main result is to propose quantified arc-consistency as a natural extension of the classical CSP notion. | Introduction to the special issue on summarization As the amount of on-line information increases, systems that can automatically sum-marize one or more documents become increasingly desirable. Recent research has investigated types of summaries, methods to create them, and methods to evaluate them. Several evaluation competitions (in the style of the National Institute of Stan-dards and Technologyís [NISTís] Text Retrieval Conference [TREC]) have helped de-termine baseline performance levels and provide a limited set of training material. Frequent workshops and symposia reflect the ongoing interest of researchers around the world. The volume of papers edited by Mani and Maybury (1999) and a book (Mani 2001) provide good introductions to the state of the art in this rapidly evolving subfield. A summary can be loosely defined as a text that is produced from one or more texts, that conveys important information in the original text(s), and that is no longer than half of the original text(s) and usually significantly less than that. Text here is used rather loosely and can refer to speech, multimedia documents, hypertext, etc. The main goal of a summary is to present the main ideas in a document in less space. If all sentences in a text document were of equal importance, producing a sum-mary would not be very effective, as any reduction in the size of a document would carry a proportional decrease in its informativeness. Luckily, information content in a document appears in bursts, and one can therefore distinguish between more and less informative segments. Identifying the informative segments at the expense of the rest is the main challenge in summarization. Of the many types of summary that have been identified (Borko and Bernier 1975; Cremmins 1996; Sparck Jones 1999; Hovy and Lin 1999), indicative summaries provide an idea of what the text is about without conveying specific content, and informative ones provide some shortened version of the content. Topic-oriented summaries con-centrate on the readerís desired topic(s) of interest, whereas generic summaries reflect the authorís point of view. Extracts are summaries created by reusing portions (words, sentences, etc. ) of the input text verbatim, while abstracts are created by regenerating | A Logical Framework to Reinforcement Learning Using Hybrid Probabilistic Logic Programs Knowledge representation is an important issue in reinforcement learning. 
Although logic programming with answer set semantics is a standard in knowledge representation, it has not been exploited in reinforcement learning to resolve its knowledge representation issues. In this paper, we present a logic programming framework to reinforcement learning, by integrating reinforcement learning, in MDP environments, with normal hybrid probabilistic logic programs with probabilistic answer set semantics [29], that is capable of representing domain-specific knowledge. We show that any reinforcement learning problem, MT, can be translated into a normal hybrid probabilistic logic program whose probabilistic answer sets correspond to trajectories in MT. We formally prove the correctness of our approach. Moreover, we show that the complexity of finding a policy for a reinforcement learning problem in our approach is NP-complete. In addition, we show that any reinforcement learning problem, MT, can be encoded as a classical logic program with answer set semantics, whose answer sets corresponds to valid trajectories in MT. We also show that a reinforcement learning problem can be encoded as a SAT problem. In addition, we present a new high level action description language that allows the factored representation of MDP. | Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year. | 1.038299 | 0.032055 | 0.029068 | 0.029068 | 0.016027 | 0.013138 | 0.003016 | 0.000083 | 0.000018 | 0.000005 | 0 | 0 | 0 | 0 |
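The query abstract of this row mentions a simple algorithm for determining acyclicity of a database scheme. One standard way to test (alpha-)acyclicity of the corresponding hypergraph is GYO-style ear removal; the sketch below is an illustrative implementation, with the set-of-sets hyperedge encoding chosen only for this example.

```python
def is_acyclic(hyperedges):
    """GYO reduction: repeatedly (a) drop vertices occurring in only one edge and
    (b) drop edges contained in another edge; the hypergraph is (alpha-)acyclic
    iff this process empties every edge."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # (a) remove vertices that appear in exactly one edge
        counts = {}
        for e in edges:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        for e in edges:
            lonely = {v for v in e if counts[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        # (b) remove one edge that is a subset of some other edge, then re-scan
        for i, e in enumerate(edges):
            if any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return all(not e for e in edges)

print(is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "a"}]))            # False: a 3-cycle
print(is_acyclic([{"a", "b", "c"}, {"b", "c", "d"}, {"d", "e"}]))  # True: has a join tree
```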
Some notes on the two-prime generator of order 2 The two-prime generator of order 2 has several desirable randomness properties if the two primes are chosen properly. In particular, Ding deduced exact formulas for the (periodic) autocorrelation and the linear complexity of these sequences. In this note, we analyze parts of the period of the two-prime generator of order 2 and obtain bounds on the aperiodic autocorrelation and linear complexity profile. | Low-density parity-check matrices for coding of correlated sources Linear codes for a coding problem of correlated sources are considered. It is proved that we can construct codes by using low-density parity-check (LDPC) matrices with maximum-likelihood (or typical set) decoding. As applications of the above coding problem, a construction of codes is presented for multiple-access channel with correlated additive noises and a coding theorem of parity-check codes for general channels is proved. | Autocorrelation Of Modified Legendre-Sidelnikov Sequences In this paper, we modify the Legendre-Sidelnikov sequence which was defined by M. Su and A. Winterhof and consider its exact autocorrelation values. This new sequence is balanced for any p, q and proved to possess low autocorrelation values in most | Correlation of the two-prime Sidel'nikov sequence Motivated by the concepts of Sidel'nikov sequences and two-prime generator (or Jacobi sequences) we introduce and analyze some new binary sequences called two-prime Sidel'nikov sequences. In the cases of twin primes and cousin primes congruent to 3 modulo 4 we show that these sequences are balanced. In the general case, besides balancedness we also study the autocorrelation, the correlation measure of order k and the linear complexity profile of these sequences showing that they have many nice pseudorandom features. | A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. | Empirical Analysis of Predictive Algorithms for Collaborative Filtering Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation... | Predicting individual disease risk based on medical history The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient.
We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future diseases risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks. | Real-time multimedia systems The expansion of multimedia networks and systems depends on real-time support for media streams and interactive multimedia services. Multimedia data are essentially continuous, heterogeneous, and isochronous, three characteristics with strong real-time implications when combined. At the same time, some multimedia services, like video-on-demand or distributed simulation, are real-time applications with sophisticated temporal functionalities in their user interface. We analyze the main problems in building such real-time multimedia systems, and we discuss-under an architectural prospect-some technological solutions especially those regarding determinism and efficient synchronization in the storage, processing, and communication of audio and video data | NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions. | Planning as search: a quantitative approach We present the thesis that planning can be viewed as problem-solving search using subgoals, macro-operators, and abstraction as knowledge sources. Our goal is to quantify problem-solving performance using these sources of knowledge. New results include the identification of subgoal distance as a fundamental measure of problem difficulty, a multiplicative time-space tradeoff for macro-operators, and an analysis of abstraction which concludes that abstraction hierarchies can reduce exponential problems to linear complexity. | Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. 
Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early. | Reasoning About Actions in Narrative Understanding Reasoning about actions has been a focus of interest in AI from the beginning and continues to receive attention. But the range of situations considered has been rather narrow and falls well short of what is needed for understanding natural language. Language understanding requires sophisticated reasoning about actions and events and the world's languages employ a variety of grammatical and lexical devices to construe, direct attention and focus on, and control inferences about actions and events. We implemented a neurally inspired computational model that is able to reason about linguistic action and event descriptions, such as those found in news stories. The system uses an active event representation that also seems to provide natural and cognitively motivated solutions to classical problems in logical theories of reasoning about actions. For logical approaches to reasoning about actions, we suggest that looking at story understanding sets up fairly strong desiderata both in terms of the fine-grained event and action distinctions and the kinds of real-time inferences required. | Small cache, big effect: provable load balancing for randomly partitioned cluster services Load balancing requests across a cluster of back-end servers is critical for avoiding performance bottlenecks and meeting service-level objectives (SLOs) in large-scale cloud computing services. This paper shows how a small, fast popularity-based front-end cache can ensure load balancing for an important class of such services; furthermore, we prove an O(n log n) lower-bound on the necessary cache size and show that this size depends only on the total number of back-end nodes n, not the number of items stored in the system. We validate our analysis through simulation and empirical results running a key-value storage system on an 85-node cluster. | Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples. | 1.070584 | 0.069827 | 0.066667 | 0.035967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
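The query abstract of this last row concerns the two-prime generator of order 2 (a Jacobi-type sequence). To make the object concrete, here is one common way to build such a binary sequence from Legendre symbols modulo two primes; the treatment of indices sharing a factor with pq varies between papers, so the convention below is an assumption of this sketch rather than the exact definition analyzed in the abstract.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, computed via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def two_prime_sequence(p, q):
    """One period (length p*q) of a binary two-prime (Jacobi-style) sequence.
    Convention assumed here: s_i = 1 iff the Legendre symbols of i mod p and
    i mod q disagree; indices sharing a factor with p*q are assigned by fiat."""
    seq = []
    for i in range(p * q):
        if i % p == 0 and i % q == 0:      # only i = 0 within one period
            seq.append(0)
        elif i % p == 0:
            seq.append(0)
        elif i % q == 0:
            seq.append(1)
        else:
            seq.append(1 if legendre(i, p) != legendre(i, q) else 0)
    return seq

s = two_prime_sequence(3, 5)
print(len(s), sum(s))  # period length and ones-count; balance depends on the convention
```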