Dataset schema (column name: dtype, min to max):
Query Text: string, length 9 to 8.71k
Ranking 1: string, length 14 to 5.31k
Ranking 2: string, length 11 to 5.31k
Ranking 3: string, length 11 to 8.42k
Ranking 4: string, length 17 to 8.71k
Ranking 5: string, length 14 to 4.95k
Ranking 6: string, length 14 to 8.42k
Ranking 7: string, length 17 to 8.42k
Ranking 8: string, length 10 to 5.31k
Ranking 9: string, length 9 to 8.42k
Ranking 10: string, length 9 to 8.42k
Ranking 11: string, length 10 to 4.11k
Ranking 12: string, length 14 to 8.33k
Ranking 13: string, length 17 to 3.82k
score_0: float64, 1 to 1.25
score_1: float64, 0 to 0.25
score_2: float64, 0 to 0.25
score_3: float64, 0 to 0.24
score_4: float64, 0 to 0.24
score_5: float64, 0 to 0.24
score_6: float64, 0 to 0.21
score_7: float64, 0 to 0.1
score_8: float64, 0 to 0.02
score_9 through score_13: float64, 0 to 0
Evolving Mach 3.0 to a migrating thread model We have modified Mach 3.0 to treat cross-domain remote procedure call (RPC) as a single entity, instead of a sequence of message passing operations. With RPC thus elevated, we improved the transfer of control during RPC by changing the thread model. Like most operating systems, Mach views threads as statically associated with a single task, with two threads involved in an RPC. An alternate model is that of migrating threads, in which, during RPC, a single thread abstraction moves between tasks with the logical flow of control, and "server" code is passively executed. We have compatibly replaced Mach's static threads with migrating threads, in an attempt to isolate this aspect of operating system design and implementation. The key element of our design is a decoupling of the thread abstraction into the execution context and the schedulable thread of control, consisting of a chain of contexts. A key element of our implementation is that threads are now "based" in the kernel, and temporarily make excursions into tasks via upcalls. The new system provides more precisely defined semantics for thread manipulation and additional control operations, allows scheduling and accounting attributes to follow threads, simplifies kernel code, and improves RPC performance. We have retained the old thread and IPC interfaces for backwards compatibility, with no changes required to existing client programs and only a minimal change to servers, as demonstrated by a functional Unix single server and clients. The logical complexity along the critical RPC path has been reduced by a factor of nine. Local RPC, doing normal marshaling, has sped up by factors of 1.7-3.4. We conclude that a migrating-thread model is superior to a static model, that kernel-visible RPC is a prerequisite for this improvement, and that it is feasible to improve existing operating systems in this manner.
A feedback-driven proportion allocator for real-rate scheduling In this paper we propose changing the decades-old practice of allocating CPU to threads based on priority to a scheme based on proportion and period. Our scheme allocates to each thread a percentage of CPU cycles over a period of time, and uses a feedback-based adaptive scheduler to assign automatically both proportion and period. Applications with known requirements, such as isochronous software devices, can bypass the adaptive scheduler by specifying their desired proportion and/or period. As a result, our scheme provides reservations to applications that need them, and the benefits of proportion and period to those that do not. Adaptive scheduling using proportion and period has several distinct benefits over either fixed or adaptive priority-based schemes: finer-grain control of allocation, lower variance in the amount of cycles allocated to a thread, and avoidance of accidental priority inversion and starvation, including defense against denial-of-service attacks. This paper describes our design of an adaptive controller and proportion-period scheduler, its implementation in Linux, and presents experimental validation of our approach.
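A minimal sketch of the proportion/period feedback idea described above, assuming a hypothetical RealRateThread structure and illustrative gain and floor values; the paper's actual controller lives inside the Linux kernel scheduler, not in user-level Python:

```python
# Sketch of feedback-driven proportion adjustment (illustrative only).
class RealRateThread:
    def __init__(self, name, period_ms, proportion):
        self.name = name
        self.period_ms = period_ms    # allocation period
        self.proportion = proportion  # fraction of the CPU per period
        self.progress = 1.0           # observed rate / required rate, last period

def adapt(threads, gain=0.1, floor=0.01):
    # Feedback step: a thread falling behind its real rate (progress < 1)
    # receives a larger proportion next period; one running ahead gives back.
    for t in threads:
        error = 1.0 - t.progress
        t.proportion = max(floor, t.proportion + gain * error)
    # Renormalize so the proportions never promise more than the whole CPU.
    total = sum(t.proportion for t in threads)
    if total > 1.0:
        for t in threads:
            t.proportion /= total
```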
On Periodic Resource Scheduling for Continuous Media Databases The Enhanced Pay-Per-View (EPPV) model for providing continuous-media services associates with each continuous-media clip a display frequency that depends on the clip's popularity. The aim is to increase the number of clients that can be serviced concurrently beyond the capacity limitations of available resources, while guaranteeing a constraint on the response time. This is achieved by sharing periodic continuous-media streams among multiple clients. The EPPV model offers a number of advantages over other data-sharing schemes (e.g., batching), which make it more attractive to large-scale service providers. In this paper, we provide a comprehensive study of the resource-scheduling problems associated with supporting EPPV for continuous-media clips with (possibly) different display rates, frequencies, and lengths. Our main objective is to maximize the amount of disk bandwidth that is effectively scheduled under the given data layout and storage constraints. Our formulation gives rise to NP-hard combinatorial optimization problems that fall within the realm of hard real-time scheduling theory. Given the intractability of the problems, we propose novel heuristic solutions with polynomial-time complexity. We also present preliminary experimental results for the average case behavior of the proposed scheduling schemes and examine how they compare to each other under different workloads. A major contribution of our work is the introduction of a robust scheduling framework that, we believe, can provide solutions for a variety of realistic EPPV resource scheduling scenarios, as well as any scheduling problem involving regular, periodic use of a shared resource. Based on this framework, we propose various interesting research directions for extending the results presented in this paper.
The Tiger Shark file system Tiger Shark is a parallel file system for IBM's AIX operating system. It is designed to support interactive multimedia, particularly large-scale systems such as interactive television (ITV). Tiger Shark scales across the entire RS/6000 product line, from small desktop machines to the SP-2 parallel supercomputer. Tiger Shark's primary features are support for continuous time data, scalability, high availability, and manageability, all of which are crucial in its role in large-scale video servers. Interestingly, most of the features that make Tiger Shark a good video server are important for other large-scale applications such as technical computing, data mining, digital library, and scalable network file servers. This paper briefly describes Tiger Shark: the environment that makes it important, the key technology it embodies, and the efforts to build products based on it.
Distributed schedule management in the Tiger video fileserver Tiger is a scalable, fault-tolerant video file server constructed from a collection of computers connected by a switched network. All content files are striped across all of the computers and disks in a Tiger system. In order to prevent conflicts for a particular resource between two viewers, Tiger schedules viewers so that they do not require access to the same resource at the same time. In the abstract, there is a single, global schedule that describes all of the viewers in the system. In practice, the schedule is distributed among all of the computers in the system, each of which has a possibly partially inconsistent view of a subset of the schedule. By using such a relaxed consistency model for the schedule, Tiger achieves scalability and fault tolerance while still providing the consistent, coordinated service required by viewers.
Operating system support for multimedia systems Distributed multimedia applications will be an important part of tomorrow's application mix and require appropriate operating system (OS) support. Neither hard real-time solutions nor best-effort solutions are directly well suited for this support. One reason is the co-existence of real-time and best effort requirements in future systems. Another reason is that the requirements of multimedia applications are not easily predictable, like variable bit rate coded video data and user interactivity. In this article, we present a survey of new developments in OS support for (distributed) multimedia systems, which include: (1) development of new CPU and disk scheduling mechanisms that combine real-time and best effort in integrated solutions; (2) provision of mechanisms to dynamically adapt resource reservations to current needs; (3) establishment of new system abstractions for resource ownership to account for resource consumption more accurately; (4) development of new file system structures; (5) introduction of memory management mechanisms that utilize knowledge about application behavior; (6) reduction of major performance bottlenecks, like copy operations in I/O subsystems; and (7) user-level control of resources including communication.
Disk cache—miss ratio analysis and design considerations The current trend of computer system technology is toward CPUs with rapidly increasing processing power and toward disk drives of rapidly increasing density, but with disk performance increasing very slowly if at all. The implication of these trends is that at some point the processing power of computer systems will be limited by the throughput of the input/output (I/O) system. A solution to this problem, which is described and evaluated in this paper, is disk cache. The idea is to buffer recently used portions of the disk address space in electronic storage. Empirically, it is shown that a large (e.g., 80-90 percent) fraction of all I/O requests are captured by a cache of an 8-Mbyte order-of-magnitude size for our workload sample. This paper considers a number of design parameters for such a cache (called cache disk or disk cache), including those that can be examined experimentally (cache location, cache size, migration algorithms, block sizes, etc.) and others (access time, bandwidth, multipathing, technology, consistency, error recovery, etc.) for which we have no relevant data or experiments. Consideration is given to both caches located in the I/O system, as with the storage controller, and those located in the CPU main memory. Experimental results are based on extensive trace-driven simulations using traces taken from three large IBM or IBM-compatible mainframe data processing installations. We find that disk cache is a powerful means of extending the performance limits of high-end computer systems.
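As a toy illustration of the miss-ratio question this paper studies (not its trace-driven methodology; the synthetic skewed trace below merely stands in for the mainframe traces the study used), a block-cache hit-ratio simulator might look like:

```python
import random
from collections import OrderedDict

# Tiny LRU block-cache simulator: fraction of requests served from cache.
def hit_ratio(trace, cache_blocks):
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark most recently used
        else:
            if len(cache) >= cache_blocks:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return hits / len(trace)

# Skewed synthetic trace: 80% of requests hit a small hot set of blocks.
random.seed(1)
trace = [random.randrange(1000) if random.random() < 0.2 else random.randrange(50)
         for _ in range(10000)]
print(hit_ratio(trace, cache_blocks=64))
```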
A probabilistic limit on the virtual size of replicated disk systems Recently, there has been considerable interest in parallel disk drive systems, in which full or partial replication of the stored data is used for both fault tolerance and enhanced performance. The performance enhancement derives both from the ability to do parallel reads, and from the reduction of seek time which results from being able to assign a read to whichever drive will produce the shortest seek. Although earlier work implied that for a k-drive system, mean seek distance for read converges to 0 as k → ∞, a refined analysis is presented which shows that this limit is actually nonzero. It is further shown that the system behaves probabilistically as if k were small, no matter how large the physical value of k is.
Fine-Grained Mobility in the Emerald System (Extended Abstract)
Large-scale sorting in parallel memories (extended abstract) We present several algorithms for sorting efficiently with parallel two-level and multilevel memories. Our main result is an elegant, easy-to-implement, optimal, deterministic algorithm for external sorting with P disk drives. This result answers the open problem posed by Vitter and Shriver. Our measure of performance is the number of parallel input/output (I/O) operations, in which each of the P disks can simultaneously transfer a block of B contiguous records. Our optimal algorithm is deterministic, and thus it improves upon the optimal randomized algorithm of Vitter and Shriver as well as the well-known deterministic but nonoptimal technique of disk striping. The second part of the paper broadens our coverage from two-level memories to more general multilevel memories. In particular we consider the blocked uniform memory hierarchy (UMH) introduced by Alpern, Carter, and Feig, and its parallelization P-UMH, along with new variants. We give optimal and nearly-optimal algorithms for a wide range of bandwidth degradations, including a parsimonious algorithm for constant bandwidth. We also develop optimal sorting algorithms for all bandwidths for other versions of UMH and P-UMH, including natural restrictions we introduce.
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is ongoing to discover stability and convergence results for the n-vehicle problem.
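For reference, the planar Frenet–Serret equations mentioned above take the following standard form for a unit-speed vehicle with position r, unit tangent x, unit normal y, and curvature (steering) control u:

$$\dot{\mathbf r} = \mathbf x, \qquad \dot{\mathbf x} = u\,\mathbf y, \qquad \dot{\mathbf y} = -u\,\mathbf x .$$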
Complexity of finite-horizon Markov decision process problems Controlled stochastic systems occur in science, engineering, manufacturing, social sciences, and many other contexts. If the system is modeled as a Markov decision process (MDP) and will run ad infinitum, the optimal control policy can be computed in polynomial time using linear programming. The problems considered here assume that the time that the process will run is finite, and based on the size of the input. There are many factors that compound the complexity of computing the optimal policy. For instance, if the controller does not have complete information about the state of the system, or if the system is represented in some very succinct manner, the optimal policy is provably not computable in time polynomial in the size of the input. We analyze the computational complexity of evaluating policies and of determining whether a sufficiently good policy exists for an MDP, based on a number of confounding factors, including the observability of the system state; the succinctness of the representation; the type of policy; even the number of actions relative to the number of states. In almost every case, we show that the decision problem is complete for some known complexity class. Some of these results are familiar from work by Papadimitriou and Tsitsiklis and others, but some, such as our PL-completeness proofs, are surprising. We include proofs of completeness for natural problems in the as yet little-studied class NP^PP.
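For background, in the fully observable finite-horizon case the optimal policy is computed by backward induction on the standard Bellman recursion (textbook material, not a contribution of this paper):

$$V_T(s) = 0, \qquad V_t(s) = \max_{a}\Big[R(s,a) + \sum_{s'} P(s' \mid s,a)\,V_{t+1}(s')\Big], \quad t = T-1,\dots,0 .$$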
Raising a Hardness Result This article presents a technique for proving problems hard for classes of the polynomial hierarchy or for PSPACE. The rationale of this technique is that some problem restrictions are able to simulate existential or universal quantifiers. If this is the case, reductions from Quantified Boolean Formulae (QBF) to these restrictions can be transformed into reductions from QBFs having one more quantifier in the front. This means that a proof of hardness of a problem at level n in the polynomial hierarchy can be split into n separate proofs, which may be simpler than a proof directly showing a reduction from a class of QBFs to the considered problem.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
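The space cost of the proposed extra redundancy is straightforward arithmetic: relative to the n^2 data elements, redundancy overhead grows from 2n/n^2 to 3n/n^2,

$$\frac{2n}{n^{2}} = \frac{2}{n} \quad\longrightarrow\quad \frac{2n+n}{n^{2}} = \frac{3}{n},$$

so for a 10 x 10 array (n = 10) the overhead rises from 20 percent to 30 percent.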
score_0 through score_13: 1.222798, 0.222798, 0.222798, 0.055731, 0.023001, 0.002404, 0.000226, 0.000027, 0.000006, 0, 0, 0, 0, 0
The cascading neural network: building the Internet of Smart Things. Most of the research on deep neural networks so far has been focused on obtaining higher accuracy levels by building increasingly large and deep architectures. Training and evaluating these models is only feasible when large amounts of resources such as processing power and memory are available. Typical applications that could benefit from these models are, however, executed on resource-constrained devices. Mobile devices such as smartphones already use deep learning techniques, but they often have to perform all processing on a remote cloud. We propose a new architecture called a cascading network that is capable of distributing a deep neural network between a local device and the cloud while keeping the required communication network traffic to a minimum. The network begins processing on the constrained device, and only relies on the remote part when the local part does not provide an accurate enough result. The cascading network allows for an early-stopping mechanism during the recall phase of the network. We evaluated our approach in an Internet of Things context where a deep neural network adds intelligence to a large amount of heterogeneous connected devices. This technique enables a whole variety of autonomous systems where sensors, actuators and computing nodes can work together. We show that the cascading architecture allows for a substantial improvement in evaluation speed on constrained devices while the loss in accuracy is kept to a minimum.
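A minimal sketch of the early-stopping recall described above; local_net and remote_net are hypothetical callables standing in for the on-device and cloud halves of a trained network, and the confidence threshold is an illustrative assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cascade_predict(x, local_net, remote_net, threshold=0.9):
    # The on-device part runs first and yields class scores plus the
    # intermediate features the remote part would consume.
    scores, features = local_net(x)
    probs = softmax(scores)
    if probs.max() >= threshold:   # confident enough: stop early, no traffic
        return int(probs.argmax())
    return int(softmax(remote_net(features)).argmax())  # defer to the cloud
```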
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
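To see the semantics on a tiny example: the program {p :- not q. q :- not p.} has exactly two stable models, {p} and {q}. A brute-force enumerator for ground normal programs (a didactic sketch, nothing like a practical answer-set solver) can check this:

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body).
rules = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
]
atoms = {"p", "q"}

def reduct(rules, M):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets M,
    # then drop the negative literals from the remaining rules.
    return [(h, pos) for (h, pos, neg) in rules if not (set(neg) & M)]

def minimal_model(pos_rules):
    # Least model of a negation-free program via fixpoint iteration.
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

stable = [set(M) for M in powerset(atoms)
          if minimal_model(reduct(rules, set(M))) == set(M)]
print(stable)  # [{'p'}, {'q'}] -- the two stable models
```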
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
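The core of such an evaluator, stripped of every optimization the paper develops, is plain recursive splitting on the quantifier prefix (a naive sketch with a hypothetical formula encoding):

```python
# Naive recursive evaluation of a prenex QBF, in the spirit of extending
# Davis-Putnam-style splitting to quantifiers; not the paper's prover.
def eval_qbf(prefix, matrix, assignment):
    if not prefix:
        return matrix(assignment)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, {**assignment, var: v})
                for v in (False, True))
    return any(branches) if q == "E" else all(branches)

# Example: forall x exists y. (x <-> y) is true.
prefix = [("A", "x"), ("E", "y")]
matrix = lambda a: a["x"] == a["y"]
print(eval_qbf(prefix, matrix, {}))  # True
```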
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
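A compact sketch of the computation this abstract describes, on toy data with an assumed degree-3 polynomial kernel: all work happens on the kernel matrix, never in the high-dimensional feature space itself.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # toy data: 100 points in 2-D

K = (X @ X.T + 1.0) ** 3                 # degree-3 polynomial kernel
n = K.shape[0]
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space

eigvals, eigvecs = np.linalg.eigh(Kc)    # ascending eigenvalues
alpha = eigvecs[:, ::-1][:, :2]          # top-2 eigenvectors
lam = eigvals[::-1][:2]
alpha = alpha / np.sqrt(lam)             # normalize the feature-space components
Z = Kc @ alpha                           # nonlinear principal components
```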
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
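The "square root form" referred to above is ordinary least squares on the linearized problem, solved by factorization instead of inversion (generic notation, not the paper's exact symbols): with measurement Jacobian A and residual b,

$$\theta^{*} = \arg\min_{\theta}\,\lVert A\theta - b\rVert^{2}, \qquad A^{\top}A = R^{\top}R, \qquad R^{\top}y = A^{\top}b, \quad R\,\theta^{*} = y,$$

where R is the upper-triangular square-root information matrix and both triangular systems are solved by substitution.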
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required to train better incident classifiers, since labeled data is expensive to obtain and the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
score_0 through score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Handwritten Mathematical Expression Recognition via Paired Adversarial Learning Recognition of handwritten mathematical expressions (MEs) is an important problem that has wide applications in practice. Handwritten ME recognition is challenging due to the variety of writing styles and ME formats. As a result, recognizers trained by optimizing the traditional supervision loss do not perform satisfactorily. To improve the robustness of the recognizer with respect to writing styles, in this work, we propose a novel paired adversarial learning method to learn semantic-invariant features. Specifically, our proposed model, named PAL-v2, consists of an attention-based recognizer and a discriminator. During training, handwritten MEs and their printed templates are fed into PAL-v2 simultaneously. The attention-based recognizer is trained to learn semantic-invariant features with the guide of the discriminator. Moreover, we adopt a convolutional decoder to alleviate the vanishing and exploding gradient problems of the RNN-based decoder, and further improve the coverage of decoding with a novel attention method. We conducted extensive experiments on the CROHME dataset to demonstrate the effectiveness of each part of the method and achieved state-of-the-art performance.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required to train better incident classifiers, since labeled data is expensive to obtain and the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
score_0 through score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
On-line learning, reasoning, rule extraction and aggregation in locally optimized evolving fuzzy neural networks Fuzzy neural networks are connectionist systems that facilitate learning from data, reasoning over fuzzy rules, rule insertion, rule extraction, and rule adaptation. The concept of a particular class of fuzzy neural networks, called FuNNs, is further developed in this paper to a new concept of evolving neuro-fuzzy systems (EFuNNs), with respective algorithms for learning, aggregation, rule insertion, rule extraction. EFuNNs operate in an on-line mode and learn incrementally through locally tuned elements. They grow as data arrive, and regularly shrink through pruning of nodes, or through node aggregation. The aggregation procedure is functionally equivalent to knowledge abstraction. EFuNNs are several orders of magnitude faster than FuNNs and other traditional connectionist models. Their features are illustrated on a benchmark data set. EFuNNs are suitable for fast learning of on-line incoming data (e.g., financial time series, biological process control), adaptive learning of speech and video data, incremental learning and knowledge discovery from large databases (e.g., in Bioinformatics), on-line tracing of processes over time, and life-long learning. The paper also includes a short review of the most common types of rules used in knowledge-based neural networks.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0 through score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Erasure Code Replication Revisited Erasure coding is a technique for achieving high availability and reliability in storage and communication systems. In this paper, we revisit the analysis of erasure code replication and point out some situations when whole-file replication is preferred. The switchover point (from preferring whole-file replication to erasure code replication) is studied, and characterized using asymptotic analysis. We also discuss the additional considerations in building erasure code replication systems.
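The whole-file-versus-erasure-code comparison rests on a standard availability calculation (generic symbols): if an object is stored as n blocks of which any m suffice to reconstruct it, each block independently available with probability p, then

$$A_{\text{erasure}} = \sum_{i=m}^{n} \binom{n}{i} p^{i}(1-p)^{n-i}, \qquad A_{\text{replication}} = 1 - (1-p)^{r},$$

and the switchover point is where the two curves cross at equal storage overhead n/m = r.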
Internet-Scale Storage Systems under Churn -- A Study of the Steady-State using Markov Models Content storage in a distributed collaborative environment uses redundancy for better resilience and thus provides good availability and durability. In a peer-to-peer environment, where peers continuously leave and rejoin the network, various lazy strategies can be employed to maintain a minimal redundancy of stored content in the system. Existing static resilience analyses fail to capture in detail the system's behavior over time, particularly the probability mass function of the actual available redundancy, since they ignore the crucial interplay between churn and maintenance operations and look only at average system properties. We perform a Markovian time-evolution analysis of the system specified by the probability mass function of each possible system state, and establish that given a fixed rate of churn and a specific maintenance strategy, the system operates in a corresponding steady state (dynamic equilibrium). Understanding the behavior of the system under such a dynamic equilibrium is a fundamental ingredient for evaluating analytically the system's performance and availability, as well as for determining the required operational maintenance cost. We also propose a new randomized variant of a lazy-maintenance scheme which has significant performance benefits in comparison to the existing deterministic procrastination-based maintenance. We demonstrate the use of our analysis methodology by comparing the performance of maintenance schemes, using as examples the new scheme we propose and the best known existing lazy maintenance scheme. The comparative study shows that our randomized lazy maintenance strategy has substantially better resilience at the same maintenance cost.
On the Impact of Replica Placement to the Reliability of Distributed Brick Storage Systems Data reliability of distributed brick storage systems critically depends on the replica placement policy, and the two governing forces are repair speed and sensitivity to multiple concurrent failures. In this paper, the authors provided an analytical framework to reason about and quantify the impact of the replica placement policy on system reliability. The novelty of the framework is its consideration of the bounded network bandwidth for data maintenance. The framework was applied to two popular schemes, namely sequential placement and random placement, and showed that both have drawbacks that significantly degrade data reliability. The stripe placement scheme was then proposed, and a near-optimal configuration parameter was found such that it provides much better reliability. The possibility of addressing the problem of correlated brick failures within the analytical framework is further discussed.
Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you? Component failure in large-scale IT installations is becoming an ever larger problem as the number of components in a single cluster approaches a million. In this paper, we present and analyze field-gathered disk replacement data from a number of large production systems, including high-performance computing sites and internet services sites. About 100,000 disks are covered by this data, some for an entire lifetime of five years. The data include drives with SCSI and FC, as well as SATA interfaces. The mean time to failure (MTTF) of those drives, as specified in their datasheets, ranges from 1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%. We find that in the field, annual disk replacement rates typically exceed 1%, with 2-4% common and up to 13% observed on some systems. This suggests that field replacement is a fairly different process than one might predict based on datasheet MTTF. We also find evidence, based on records of disk replacements in the field, that failure rate is not constant with age, and that, rather than a significant infant mortality effect, we see a significant early onset of wearout degradation. That is, replacement rates in our data grew constantly with age, an effect often assumed not to set in until after a nominal lifetime of 5 years. Interestingly, we observe little difference in replacement rates between SCSI, FC and SATA drives, potentially an indication that disk-independent factors, such as operating conditions, affect replacement rates more than component specific factors. On the other hand, we see only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of media error rates, and this instance involved SATA disks. Time between replacement, a proxy for time between failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence.
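The 0.88% nominal annual failure rate quoted in the abstract follows from simple arithmetic under the constant-failure-rate assumption; a few lines reproduce it.

```python
# Nominal annual failure rate implied by a datasheet MTTF, under the
# usual constant-failure-rate assumption (~8760 hours per year).
def nominal_afr(mttf_hours: float) -> float:
    return 8760.0 / mttf_hours

print(f"{nominal_afr(1_000_000):.2%}")  # ~0.88%, matching the abstract
print(f"{nominal_afr(1_500_000):.2%}")  # ~0.58%
```

The paper's field observation is that measured replacement rates (1% to 13%) sit well above these datasheet-derived figures.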
Logic programs with classical negation
Logic programming and knowledge representation In this paper, we review recent work aimed at the application of declarative logic programming to knowledge representation in artificial intelligence. We consider extensions of the language of definite logic programs by classical (strong) negation, disjunction, and some modal operators and show how each of the added features extends the representational power of the language.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
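The paper proposes its own, faster solvers; as a generic, non-authoritative illustration of the L1-regularized least-squares subproblem it targets, a textbook iterative soft-thresholding (ISTA) loop looks like this, with a random matrix standing in for a learned basis.

```python
import numpy as np

def ista(A, b, lam, iters=200):
    # Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # (Not the paper's algorithm; just the textbook baseline.)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))     # overcomplete basis (random stand-in)
b = rng.standard_normal(64)
print(np.count_nonzero(ista(A, b, lam=0.5)))  # size of the sparse support
```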
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate checksum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[\log^k n]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all $k$.
Normal forms for answer sets programming Normal forms for logic programs under stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The body of rules is composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle, or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e., the syntactic characterization of the existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called the Cycle Graph, which is presented in the companion article Costantini (2004b).
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that their disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.1
0.05
0.002857
0
0
0
0
0
0
0
0
0
0
A Requirements Analysis for Parallel KDD Systems The current generation of data mining tools has limited capacity and performance, since these tools tend to be sequential. This paper explores a migration path out of this bottleneck by considering an integrated hardware and software approach to parallelize data mining. Our analysis shows that parallel data mining solutions require the following components: parallel data mining algorithms, parallel and distributed databases, parallel file systems, parallel I/O, tertiary storage, management of online data, support for heterogeneous data representations, security, quality of service and pricing metrics. State of the art technology in these areas is surveyed with an eye towards an integration strategy leading to a complete solution.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
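A minimal sketch of the stable-model test via the Gelfond-Lifschitz reduct, for ground normal programs; the program encoding and helper names below are mine, chosen for illustration.

```python
# A rule is (head, positive_body, negative_body). The test follows the
# reduct construction: delete rules whose negative body intersects M,
# drop the remaining negative literals, then compare M with the least
# model of the resulting positive program.
def least_model(rules):
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(rules, m):
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & m)]
    return least_model(reduct) == m

prog = [("p", set(), {"q"}),   # p :- not q
        ("q", set(), {"p"})]   # q :- not p
print(is_stable(prog, {"p"}), is_stable(prog, {"q"}), is_stable(prog, {"p", "q"}))
# -> True True False: {p} and {q} are stable models, {p, q} is not.
```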
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
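A compact numpy sketch of the construction the abstract describes: form the Gram matrix for a polynomial kernel (echoing the paper's pixel-product example), center it, and solve the eigenvalue problem there instead of in the huge feature space. The kernel degree and data are illustrative; this is not the authors' code.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=3):
    K = (X @ X.T + 1.0) ** degree       # polynomial kernel Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                      # double-centering in feature space
    vals, vecs = np.linalg.eigh(Kc)     # ascending eigenvalues
    vals = vals[::-1][:n_components]
    vecs = vecs[:, ::-1][:, :n_components]
    alphas = vecs / np.sqrt(vals)       # normalize: lam * <a, a> = 1
    return Kc @ alphas                  # projections of the training points

X = np.random.default_rng(1).standard_normal((100, 5))
print(kernel_pca(X).shape)              # (100, 2)
```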
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required in training better incident classifiers, which is expensive to obtain, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that their disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
FPGA-based hardware acceleration for local complexity analysis of massive genomic data While genomics has significantly advanced modern biological achievements, it requires extensive computational power, traditionally employed on large-scale cluster machines as well as multi-core systems. However, emerging research results show that FPGA-based acceleration of algorithms for genomic applications greatly improves the performance and energy efficiency when compared to multi-core systems and clusters. In this work, we present a parallel, hardware acceleration architecture of the CAST (Complexity Analysis of Sequence Tracts) algorithm, employed by biologists for complexity analysis of protein sequences encoded in genomic data. CAST is used for detecting (and subsequently masking) low-complexity regions (LCRs) in protein sequences. We designed and implemented the CAST accelerator architecture and built an FPGA prototype, with the purpose of benchmarking its performance against serial and multithreaded implementations of the CAST algorithm in software. The proposed architecture achieves remarkable speedup compared to both serial and multithreaded software CAST implementations, ranging from approx. 100x-5000x, depending on the system configuration and the dataset features, such as low-complexity content and sequence length distribution. Such performance may enable complex analyses of voluminous sequence datasets, and has the potential to interoperate with other hardware architectures for protein sequence analysis.
Single Pass, BLAST-Like, Approximate String Matching on FPGAs Approximate string matching is fundamental to bioinformatics, and has been the subject of numerous FPGA acceleration studies. We address issues with respect to FPGA implementations of both BLAST- and dynamic-programming- (DP) based methods. Our primary contributions are two new algorithms for emulating the seeding and extension phases of BLAST. These operate in a single pass through a database at streaming rate (110 Maa/sec on a VP70 for query sizes up to 600 and 170 Maa/sec on a Virtex4 for query sizes up to 1024), and with no preprocessing other than loading the query string. Further, they achieve very high sensitivity with no slowdown. While current DP-based methods also operate at streaming rate, generating results can be cumbersome. We address this with a new structure for data extraction. We present results from several implementations.
A General Reconfigurable Architecture for the BLAST Algorithm The process of DNA sequence matching and database search is one of the major problems of the bioinformatics community. Major scientific efforts to address this problem have provided algorithms and software tools for molecular biologists since the early 1970s. At the algorithmic and software level, BLAST is by far the most popular tool. It has been developed and continues to be maintained and distributed by the NCBI organization. The BLAST algorithm and software is computationally very intensive, and as a result several computer vendors use it as a benchmark. On the other hand, no systematic approach for hardware speedup of BLAST and its variants for different query and database sizes has been reported to date. In this paper we present our architecture that implements the BLAST algorithm for all of its major versions, and for any size of database and query. The system has been fully designed and partially implemented with reconfigurable logic. It consists of software and hardware parts and achieves speedups ranging from several times up to thousands of times versus general-purpose computers.
Biosequence Similarity Search on the Mercury System Biosequence similarity search is an important application in modern molecular biology. Search algorithms aim to identify sets of sequences whose extensional similarity suggests a common evolutionary origin or function. The most widely used similarity search tool for biosequences is BLAST, a program designed to compare query sequences to a database. Here, we present the design of BLASTN, the version of BLAST that searches DNA sequences, on the Mercury system, an architecture that supports high-volume, high-throughput data movement off a data store and into reconfigurable hardware. An important component of application deployment on the Mercury system is the functional decomposition of the application onto both the reconfigurable hardware and the traditional processor. Both the Mercury BLASTN application design and its performance analysis are described.
A Run-Time Reconfigurable System for Gene-Sequence Searching Advances in the field of bio-technology have led to an ever increasing demand for computational resources to rapidly search large databases of genetic information. Databases with billions of data elements are routinely compared and searched for matching and near-matching patterns. In this paper we present a system developed to search DNA sequence data using run-time reconfiguration of Field Programmable Gate Arrays (FPGAs). The system provides an order of magnitude increase in performance while reducing hardware complexity when compared to existing commercial systems.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
The design of POSTGRES This paper presents the preliminary design of a new database management system, called POSTGRES, that is the successor to the INGRES relational database system. The main design goals of the new system are to provide better support for complex objects; provide user extendibility for data types, operators and access methods; provide facilities for active databases (i.e., alerters and triggers) and inferencing, including forward- and backward-chaining; simplify the DBMS code for crash recovery; produce a design that can take advantage of optical disks, workstations composed of multiple tightly-coupled processors, and custom designed VLSI chips; and make as few changes as possible (preferably none) to the relational model. The paper describes the query language, programming language interface, system architecture, query processing strategy, and storage system for the new system.
A theory of diagnosis from first principles
Restricted Monotonicity A knowledge representation problem can be sometimes viewed as an element of a family of problems, with parameters corresponding to possible assumptions about the domain under consideration. When additional assumptions are made, the class of domains that are being described becomes smaller, so that the class of conclusions that are true in all the domains becomes larger. As a result, a satisfactory solution to a parametric knowledge representation problem on the basis of some nonmonotonic...
Complexity of Power Default Reasoning This paper derives a new and surprisingly low complexity result for inference in a new form of Reiter's propositional default logic. The problem studied here is the "default inference problem" whose fundamental importance was pointed out by Kraus, Lehmann, and Magidor. We prove that ``normal'' default inference, in propositional logic, is a problem complete for co-NP(3), the third level of the so-called Boolean hierarchy. Our result (by changing the underlying semantics) contrasts favorably with a similar result of Gottlob, who proves that standard default inference is complete for the second level of the polynomial hierarchy. Our inference relation also obeys all of the laws for preferential consequence relations set forth by Kraus, Lehmann, and Magidor. In particular, we get the property of being able to reason by cases and the law of cautious monotony. Both of these laws fail for standard propositional default logic.The key technique for our results is the use of Scott's domain theory to integrate defaults into partial model theory of the logic, instead of keeping defaults as quasi-proof rules in the syntax. In particular, reasoning disjunctively entails using the Smyth powerdomain.
Formations of vehicles in cyclic pursuit Inspired by the so-called "bugs" problem from mathematics, we study the geometric formations of multivehicle systems under cyclic pursuit. First, we introduce the notion of cyclic pursuit by examining a system of identical linear agents in the plane. This idea is then extended to a system of wheeled vehicles, each subject to a single nonholonomic constraint (i.e., unicycles), which is the principal focus of this paper. The pursuit framework is particularly simple in that the n identical vehicles are ordered such that vehicle i pursues vehicle i+1, modulo n. In this paper, we assume each vehicle has the same constant forward speed. We show that the system's equilibrium formations are generalized regular polygons, and we expose how the multivehicle system's global behavior can be shaped through appropriate controller gain assignments. We then study the local stability of these equilibrium polygons, revealing which formations are stable and which are not.
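For the introductory linear case mentioned in the abstract, the classical pursuit law z_i' = z_{i+1} - z_i can be simulated in a few lines; the step size, horizon, and number of agents below are arbitrary choices for illustration.

```python
import numpy as np

# Linear cyclic pursuit in the plane: agent i steers toward agent i+1
# (indices modulo n). Positions are complex numbers; forward Euler.
n, dt, steps = 5, 0.01, 2000
rng = np.random.default_rng(2)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
centroid = z.mean()                         # invariant under this dynamics
for _ in range(steps):
    z = z + dt * (np.roll(z, -1) - z)
print(np.allclose(z, centroid, atol=1e-2))  # agents contract to the centroid
```

The contraction follows because the circulant system matrix has eigenvalues with non-positive real part, with zero only on the centroid mode.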
Striping in a RAID level 5 disk array Redundant disk arrays are an increasingly popular way to improve I/O system performance. Past research has studied how to stripe data in non-redundant (RAID Level 0) disk arrays, but none has yet been done on how to stripe data in redundant disk arrays such as RAID Level 5, or on how the choice of striping unit varies with the number of disks. Using synthetic workloads, we derive simple design rules for striping data in RAID Level 5 disk arrays given varying amounts of workload information. We then validate the synthetically derived design rules using real workload traces to show that the design rules apply well to real systems.We find no difference in the optimal striping units for RAID Level 0 and 5 for read-intensive workloads. For write-intensive workloads, in contrast, the overhead of maintaining parity causes full-stripe writes (writes that span the entire error-correction group) to be more efficient than read-modify writes or reconstruct writes. This additional factor causes the optimal striping unit for RAID Level 5 to be four times smaller for write-intensive workloads than for read-intensive workloads.We next investigate how the optimal striping unit varies with the number of disks in an array. We find that the optimal striping unit for reads in a RAID Level 5 varies inversely to the number of disks, but that the optimal striping unit for writes varies with the number of disks. Overall, we find that the optimal striping unit for workloads with an unspecified mix of reads and writes is independent of the number of disks.Together, these trends lead us to recommend (in the absence of specific workload information) that the striping unit over a wide range of RAID Level 5 disk array sizes be equal to 1/2 * average positioning time * disk transfer rate.
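Plugging hypothetical disk parameters into the paper's closing rule of thumb makes the recommendation concrete; the seek time and transfer rate below are invented for illustration.

```python
# Rule of thumb for RAID Level 5 with an unspecified read/write mix:
# striping unit = 1/2 * average positioning time * disk transfer rate.
avg_positioning_s = 0.012        # 12 ms average seek + rotational delay
transfer_rate_bps = 50e6         # 50 MB/s sustained transfer rate
stripe_unit = 0.5 * avg_positioning_s * transfer_rate_bps
print(f"{stripe_unit / 1024:.0f} KB")  # ~293 KB per-disk striping unit
```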
MAXSAT Heuristics for Cost Optimal Planning.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that their disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.24
0.010484
0.006857
0.000622
0.000229
0
0
0
0
0
0
0
0
0
A Case for Continuous Data Protection at Block Level in Disk Array Storages This paper presents a study of data storages for continuous data protection (CDP). After analyzing the existing data protection technologies, we propose a new disk array architecture that provides Timely Recovery to Any Point-in-time, referred to as TRAP. TRAP stores not only the data stripe upon a write to the array but also the time-stamped exclusive-ORs (XORs) of successive writes to each data block. By leveraging the XOR operations that are performed upon each block write in today's RAID4/5 controllers, TRAP does not incur noticeable performance overhead. More importantly, TRAP is able to recover data very quickly to any point-in-time upon data damage by tracing back the sequence and history of XORs resulting from writes. What is interesting is that the TRAP architecture is very space efficient. We have implemented a prototype of the new TRAP architecture using software at the block level and carried out extensive performance measurements using TPC-C benchmarks running on Oracle and Postgres databases, TPC-W running on a MySQL database, and file system benchmarks running on Linux and Windows systems. Our experiments demonstrated that TRAP not only is able to recover data to any point-in-time very quickly upon a failure but also uses less storage space than traditional daily incremental backup/snapshot. Compared to the state-of-the-art CDP technologies, TRAP saves disk storage space by one to two orders of magnitude with a simple and fast encoding algorithm. In addition, TRAP can provide two-way data recovery with the availability of only one reference image, in contrast to the one-way recovery of snapshot and incremental backup technologies.
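A minimal sketch of the XOR-log idea described above; the block size, log layout, and names are invented for illustration and say nothing about TRAP's on-disk format.

```python
# Alongside each block write, keep the xor of the old and new contents;
# any earlier version can then be rebuilt by walking the log backwards
# (or forwards from a baseline image), which is the two-way recovery
# property the abstract mentions.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

block = b"\x00" * 8
log = []                                # time-ordered xor records
history = [block]
for new in (b"AAAAAAAA", b"AAAABBBB", b"CCCCBBBB"):
    log.append(xor_bytes(block, new))   # parity-style xor of old vs. new
    block = new
    history.append(block)

# Recover the version two writes back from the current block:
recovered = block
for rec in reversed(log[-2:]):
    recovered = xor_bytes(recovered, rec)
print(recovered == history[1])          # True
```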
FAST: quick application launch on solid-state drives Application launch performance is of great importance to system platform developers and vendors as it greatly affects the degree of users' satisfaction. The single most effective way to improve application launch performance is to replace a hard disk drive (HDD) with a solid state drive (SSD), which has recently become affordable and popular. A natural question is then whether or not to replace the traditional HDD-aware application launchers with a new SSD-aware optimizer. We address this question by analyzing the inefficiency of the HDD-aware application launchers on SSDs and then proposing a new SSD-aware application prefetching scheme, called the Fast Application STarter (FAST). The key idea of FAST is to overlap the computation (CPU) time with the SSD access (I/O) time during an application launch. FAST is composed of a set of user-level components and system debugging tools provided by the Linux OS (operating system). In addition, FAST uses a system-call wrapper to automatically detect application launches. Hence, FAST can be easily deployed in any recent Linux versions without kernel recompilation. We implemented FAST on a desktop PC with a SSD running Linux 2.6.32 OS and evaluated it by launching a set of widely-used applications, demonstrating an average of 28% reduction of application launch time as compared to PC without a prefetcher.
Logging RAID - An Approach to Fast, Reliable, and Low-Cost Disk Arrays Parity-based disk arrays provide high reliability and high performance for read and large write accesses at low storage cost. However, small writes are notoriously slow due to the well-known read-modify-write problem. This paper presents logging RAID, a disk array architecture that adopts data logging techniques to overcome the small-write problem in parity-based disk arrays. Logging RAID achieves high performance for a wide variety of I/O access patterns with very small disk space overhead. We show this through trace-driven simulations.
Harmonia: A globally coordinated garbage collector for arrays of Solid-State Drives Solid-State Drives (SSDs) offer significant performance improvements over hard disk drives (HDD) on a number of workloads. The frequency of garbage collection (GC) activity is directly correlated with the pattern, frequency, and volume of write requests, and scheduling of GC is controlled by logic internal to the SSD. SSDs can exhibit significant performance degradations when garbage collection (GC) conflicts with an ongoing I/O request stream. When using SSDs in a RAID array, the lack of coordination of the local GC processes amplifies these performance degradations. No RAID controller or SSD available today has the technology to overcome this limitation. This paper presents Harmonia, a Global Garbage Collection (GGC) mechanism to improve response times and reduce performance variability for a RAID array of SSDs. Our proposal includes a high-level design of SSD-aware RAID controller and GGC-capable SSD devices, as well as algorithms to coordinate the global GC cycles. Our simulations show that this design improves response time and reduces performance variability for a wide variety of enterprise workloads. For bursty, write dominant workloads response time was improved by 69% while performance variability was reduced by 71%.
Hystor: making the best use of solid state drives in high performance storage systems With rapid technical improvements, flash-memory-based Solid State Drives (SSDs) are becoming an important part of the computer storage hierarchy, significantly improving performance and energy efficiency. However, due to their relatively high price and low capacity, a major system research issue is how to make SSDs play their most effective roles in a high-performance storage system in cost- and performance-effective ways. In this paper, we will answer several related questions with insights based on the design and implementation of a high performance hybrid storage system, called Hystor. We make the best use of SSDs in storage systems by achieving a set of optimization objectives from both system deployment and algorithm design perspectives. Hystor manages both SSDs and hard disk drives (HDDs) as one single block device with minimal changes to existing OS kernels. By monitoring I/O access patterns at runtime, Hystor can effectively identify blocks that (1) can result in long latencies or (2) are semantically critical (e.g. file system metadata), and stores them in SSDs for future accesses to achieve a significant performance improvement. In order to further leverage the exceptionally high performance of writes in the state-of-the-art SSDs, Hystor also serves as a write-back buffer to speed up write requests. Our measurements on Hystor implemented in the Linux kernel 2.6.25.8 show that it can take advantage of the performance merits of SSDs with only a few lines of changes to the stock Linux kernel. Our system study shows that in a highly effective hybrid storage system, SSDs should play a major role as an independent storage where the best suitable data are adaptively and timely migrated in and retained, and it can also be effective to serve as a write-back buffer.
The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms A fundamental challenge in improving file system performance is to design effective block replacement algorithms to minimize buffer cache misses. Despite the well-known interactions between prefetching and caching, almost all buffer cache replacement algorithms have been proposed and studied comparatively, without taking into account file system prefetching, which exists in all modern operating systems. This paper shows that such kernel prefetching can have a significant impact on the relative performance in terms of the number of actual disk I/Os of many well-known replacement algorithms; it can not only narrow the performance gap but also change the relative performance benefits of different algorithms. Moreover, since prefetching can increase the number of blocks clustered for each disk I/O and, hence, the time to complete the I/O, the reduction in the number of disk I/Os may not translate into proportional reduction in the total I/O time. These results demonstrate the importance of buffer caching research taking file system prefetching into consideration and comparing the actual disk I/Os and the execution time under different replacement algorithms.
Flexible buffer allocation based on marginal gains Previous works on buffer allocation are based either exclusively on the availability of buffers at runtime or on the access patterns of queries. In this paper we propose a unified approach for buffer allocation in which both of these considerations are taken into account. Our approach is based on the notion of marginal gains which specify the expected reduction in page faults in allocating extra buffers to a query. Simulation results show that our approach is promising, and allocation algorithms based on marginal gains perform considerably better than existing ones.
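A hedged sketch of allocation by marginal gains: repeatedly grant the next buffer page to the query whose expected page-fault reduction is largest. The concave fault model and names below are made up, standing in for the paper's estimates.

```python
import heapq

def fault(q, b):
    return q["size"] / (1 + b)          # hypothetical page-fault model

def allocate(queries, total_buffers):
    alloc = {q["name"]: 0 for q in queries}
    # Max-heap keyed on the marginal gain of granting one more page.
    heap = [(-(fault(q, 0) - fault(q, 1)), q["name"], q) for q in queries]
    heapq.heapify(heap)
    for _ in range(total_buffers):
        gain, name, q = heapq.heappop(heap)
        alloc[name] += 1
        b = alloc[name]
        heapq.heappush(heap, (-(fault(q, b) - fault(q, b + 1)), name, q))
    return alloc

queries = [{"name": "Q1", "size": 100}, {"name": "Q2", "size": 30}]
print(allocate(queries, 10))            # most pages go to the larger query
```

For concave fault curves like this one, the greedy choice maximizes total fault reduction, which is why marginal gains are a natural allocation criterion.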
Implementation and performance of integrated application-controlled file caching, prefetching, and disk scheduling As the performance gap between disks and microprocessors continues to increase, effective utilization of the file cache becomes increasingly important. Application-controlled file caching and prefetching can apply application-specific knowledge to improve file cache management. However, supporting application-controlled file caching and prefetching is nontrivial because caching and prefetching need to be integrated carefully, and the kernel needs to allocate cache blocks among processes appropriately. This article presents the design, implementation, and performance of a file system that integrates application-controlled caching, prefetching, and disk scheduling. We use a two-level cache management strategy. The kernel uses the LRU-SP (Least-Recently-Used with Swapping and Placeholders) policy to allocate blocks to processes, and each process integrates application-specific caching and prefetching based on the controlled-aggressive policy, an algorithm previously shown in a theoretical sense to be nearly optimal. Each process also improves its disk access latency by submitting its prefetches in batches so that the requests can be scheduled to optimize disk access performance. Our measurements show that this combination of techniques greatly improves the performance of the file system. We measured that the running time is reduced by 3% to 49% (average 26%) for single-process workloads and by 5% to 76% (average 32%) for multiprocess workloads.
Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
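A minimal usage sketch in the spirit of the abstract, where a polynomial kernel plays the role of the non-linear map to a high-dimensional feature space; it assumes scikit-learn is installed, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # not linearly separable

# Degree-2 polynomial kernel: a linear surface in the lifted feature space.
clf = SVC(kernel="poly", degree=2, C=1.0).fit(X, y)
print(clf.score(X, y), len(clf.support_))  # train accuracy, #support vectors
```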
Inducing causal laws by regular inference Recent work on representing action and change has introduced high-level action languages which describe the effects of actions as causal laws in a declarative way. In this paper, we propose an algorithm to induce the effects of actions from an incomplete domain description and observations after executing action sequences, all of which are represented in the action language $\mathcal{A}$. Our induction algorithm generates effect propositions in $\mathcal{A}$ based on regular inference, i.e., an algorithm to learn finite automata. As opposed to previous work on learning automata from scratch, we are concerned with explanatory induction which accounts for observations from background knowledge together with induced hypotheses. Compared with previous approaches in ILP, an observation input to our induction algorithm is not restricted to a narrative but can be any fact observed after executing a sequence of actions. As a result, induction of causal laws can be formally characterized within action languages.
Computational properties of argument systems satisfying graph-theoretic constraints One difficulty that arises in abstract argument systems is that many natural questions regarding argument acceptability are, in general, computationally intractable, having been classified as complete for classes such as NP and co-NP. In consequence, a number of researchers have considered methods for specialising the structure of such systems so as to identify classes for which efficient decision processes exist. In this paper the effect of a number of graph-theoretic restrictions is considered: k-partite systems (k≥2) in which the set of arguments may be partitioned into k sets each of which is conflict-free; systems in which the numbers of attacks originating from and made upon any argument are bounded; planar systems; and, finally, those of k-bounded treewidth. For the class of bipartite graphs, it is shown that determining the acceptability status of a specific argument can be accomplished in polynomial time under both credulous and sceptical semantics. In addition we establish the existence of polynomial-time methods for systems having bounded treewidth when deciding the following: whether a given (set of) arguments is credulously accepted; if the system has a non-empty preferred extension; has a stable extension; is coherent; has at least one sceptically accepted argument. In contrast to these positive results, however, deciding whether an arbitrary set of arguments is "collectively acceptable" remains NP-complete in bipartite systems. Furthermore, for both planar and bounded-degree systems the principal decision problems are as hard as in the unrestricted cases. In deriving these latter results we introduce various concepts of "simulating" a general argument system by a restricted class, allowing any argument system to be translated into one which both has bounded degree and is planar. Finally, for the development of basic argument systems to so-called "value-based frameworks", we present results indicating that decision problems known to be intractable in their most general form remain so even under quite severe graph-theoretic restrictions. In particular the problem of deciding "subjective acceptability" continues to be NP-complete even when the underlying graph is a binary tree.
On the undecidability of probabilistic planning and related stochastic optimization problems Automated planning, the problem of how an agent achieves a goal given a repertoire of actions, is one of the foundational and most widely studied problems in the AI literature. The original formulation of the problem makes strong assumptions regarding the agent's knowledge and control over the world, namely that its information is complete and correct, and that the results of its actions are deterministic and known. Recent research in planning under uncertainty has endeavored to relax these assumptions, providing formal and computational models wherein the agent has incomplete or noisy information about the world and has noisy sensors and effectors. This research has mainly taken one of two approaches: extend the classical planning paradigm to a semantics that admits uncertainty, or adopt another framework for approaching the problem, most commonly the Markov Decision Process (MDP) model. This paper presents a complexity analysis of planning under uncertainty. It begins with the "probabilistic classical planning" problem, showing that problem to be formally undecidable. This fundamental result is then applied to a broad class of stochastic optimization problems, in brief any problem statement where the agent (a) operates over an infinite or indefinite time horizon, and (b) has available only probabilistic information about the system's state. Undecidability is established for policy-existence problems for partially observable infinite-horizon Markov decision processes under discounted and undiscounted total reward models, average-reward models, and state-avoidance models. The results also apply to corresponding approximation problems with undiscounted objective functions. The paper answers a significant open question raised by Papadimitriou and Tsitsiklis [Math. Oper. Res. 12 (3) (1987) 441-450] about the complexity of infinite-horizon POMDPs.
On the Complexity of Plan Adaptation by Derivational Analogy in a Universal Classical Planning Framework In this paper we present an algorithm called DerUCP, which can be regarded as a general model for plan adaptation using Derivational Analogy. Using DerUCP, we show that previous results on the complexity of plan adaptation do not apply to Derivational Analogy. We also show that Derivational Analogy can potentially produce exponential reductions in the size of the search space generated by a planning system.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that their disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.11
0.033333
0.02
0.02
0.008333
0.001769
0.000027
0.000003
0
0
0
0
0
0
Implementation and Evaluation of File Write-Back and Prefetching for MPI-IO Over GPFS In this paper we present the implementation of an open-source MPI-IO interface for the General Parallel File System (GPFS). Our solution includes the design and implementation of GPFS-based write-back and prefetching modules, which have been integrated in ROMIO. A collective file write strategy based on GPFS data-shipping, and a view-based collective I/O mechanism, relying on GPFS mechanisms, are at the core of the novel optimizations proposed in this paper. View-based collective I/O includes a thread-based flushing method implementing a write-back policy for latency hiding, and a prefetching method, based on GPFS hints, to increase small read access performance. Performance evaluations show that our implementation achieves high-performance and hides the latency of file accesses through the combination of view-based collective file accesses, and the overlapping of computation, communication and I/O. This is especially true for collective and small-size access patterns, which are very frequent in parallel scientific applications.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
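For readers who want the recipe rather than the derivation, here is a minimal numpy sketch of kernel PCA with an RBF kernel: build the kernel matrix, center it in feature space, take the top eigenvectors, and project. The data, kernel width, and number of components are arbitrary toy choices, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = rbf_kernel(X)
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space

eigval, eigvec = np.linalg.eigh(Kc)          # ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
k = 2
alphas = eigvec[:, :k] / np.sqrt(eigval[:k])  # unit-norm feature-space axes
components = Kc @ alphas                      # projections of training data
print(components.shape)  # (100, 2)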
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
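A toy illustration of the square-root idea, under the simplifying assumption of a dense, already-linearized measurement Jacobian: solve the least-squares problem via QR of the Jacobian instead of forming the information matrix explicitly. Matrix sizes here are arbitrary, and real SAM exploits sparsity and column ordering, which this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(12, 6))   # stacked measurement Jacobians
b = rng.normal(size=12)        # stacked residuals

Q, R = np.linalg.qr(A)           # A = Q R; R is the square-root factor
x = np.linalg.solve(R, Q.T @ b)  # back-substitution R x = Q^T b

# Same minimizer as the normal equations, but better conditioned:
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, x_ne))  # True
```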
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Implicit Density Estimation by Local Moment Matching to Sample from Auto-Encoders Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of the unknown data generating density. This paper contributes to the mathematical understanding of this phenomenon and helps define better-justified sampling algorithms for deep learning based on auto-encoder variants. We consider an MCMC in which each step samples from a Gaussian whose mean and covariance matrix depend on the previous state; this chain defines a target density through its asymptotic distribution. First, we show that good choices (in the sense of consistency) for these mean and covariance functions are the local expected value and local covariance under that target density. Then we show that an auto-encoder with a contractive penalty captures estimators of these local moments in its reconstruction function and its Jacobian. A contribution of this work is thus a novel alternative to maximum-likelihood density estimation, which we call local moment matching. It also justifies a recently proposed sampling algorithm for the Contractive Auto-Encoder and extends it to the Denoising Auto-Encoder.
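The sampling chain the paper analyzes is easy to write down. The sketch below is hypothetical: `reconstruct` stands in for a trained (contractive or denoising) auto-encoder's encode-decode map and simply pulls points toward the unit circle, so the chain wanders near that one-dimensional manifold; sigma and the step count are arbitrary.

```python
import numpy as np

def reconstruct(x):
    # Hypothetical placeholder for a trained auto-encoder: pulls
    # states toward the unit circle, mimicking a reconstruction
    # function fit to data living on that manifold.
    return x / np.linalg.norm(x)

def sample_chain(x0, n_steps=1000, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = np.asarray(x0, float), []
    for _ in range(n_steps):
        # Each step: Gaussian centered at the reconstruction.
        x = reconstruct(x) + sigma * rng.normal(size=x.shape)
        samples.append(x.copy())
    return np.array(samples)

chain = sample_chain([2.0, 0.0])
# After burn-in the samples hover near the unit circle:
print(np.abs(np.linalg.norm(chain[500:], axis=1).mean() - 1.0) < 0.2)
```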
On Autoencoders and Score Matching for Energy Based Models.
Better Mixing via Deep Representations It has previously been hypothesized, and supported with some experimental evidence, that deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation. We study the following related conjecture: better representations, in the sense of better disentangling, can be exploited to produce faster-mixing Markov chains. Consequently, mixing would be more efficient at higher levels of representation. To better understand why and how this is happening, we propose a secondary conjecture: the higher-level samples fill more uniformly the space they occupy and the high-density manifolds tend to unfold when represented at higher levels. The paper discusses these hypotheses and tests them experimentally through visualization and measurements of mixing and interpolating between samples.
Differentiable Sparse Coding Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a Laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
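For context, the L1 (Laplacian-prior) MAP inference the paper starts from can be sketched with ISTA. This is my own minimal baseline, not the paper's smoothed KL-regularized variant; the dictionary and sparsity level are toy choices.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, x, lam=0.1, n_iter=200):
    """min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50)); D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(50); a_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
x = D @ a_true + 0.01 * rng.normal(size=20)
print(np.nonzero(np.abs(ista(D, x)) > 0.1)[0])  # mostly {3, 17, 42}
```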
The Manifold Tangent Classifier. We combine three important ideas present in previous work for building classifiers: the semi-supervised hypothesis (the input distribution contains information about the classifier), the unsupervised manifold hypothesis (data density concentrates near low-dimensional manifolds), and the manifold hypothesis for classification (different classes correspond to disjoint manifolds separated by low density). We exploit a novel algorithm for capturing manifold structure (high-order contractive auto-encoders) and we show how it builds a topological atlas of charts, each chart being characterized by the principal singular vectors of the Jacobian of a representation mapping. This representation learning algorithm can be stacked to yield a deep architecture, and we combine it with a domain knowledge-free version of the TangentProp algorithm to encourage the classifier to be insensitive to local direction changes along the manifold. Record-breaking classification results are obtained.
Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives
Dependency networks for inference, collaborative filtering, and data visualization We describe a graphical model for probabilistic relationships--an alternative to the Bayesian network--called a dependency network. The graph of a dependency network, unlike a Bayesian network, is potentially cyclic. The probability component of a dependency network, like a Bayesian network, is a set of conditional distributions, one for each node given its parents. We identify several basic properties of this representation and describe a computationally efficient procedure for learning the graph and probability components from data. We describe the application of this representation to probabilistic inference, collaborative filtering (the task of predicting preferences), and the visualization of acausal predictive relationships.
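A minimal way to see the representation in action: one hand-written conditional distribution per binary node, sampled with ordered (pseudo-)Gibbs. The conditionals below are stand-ins; the paper learns them from data (e.g., with probabilistic decision trees), and the pseudo-Gibbs chain only approximates a joint distribution.

```python
import numpy as np

# P(x_i = 1 | all other variables), one conditional per node.
conditionals = {
    0: lambda x: 0.9 if x[1] == 1 else 0.2,
    1: lambda x: 0.7,
    2: lambda x: 0.8 if x[0] == x[1] else 0.1,
}

def ordered_gibbs(n_samples=5000, burn_in=100, seed=0):
    rng = np.random.default_rng(seed)
    x, out = np.zeros(3, dtype=int), []
    for t in range(burn_in + n_samples):
        for i in sorted(conditionals):   # fixed visit order per sweep
            x[i] = rng.random() < conditionals[i](x)
        if t >= burn_in:
            out.append(x.copy())
    return np.array(out)

print(ordered_gibbs().mean(axis=0))  # approximate marginals of the 3 nodes
```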
3D object understanding with 3D Convolutional Neural Networks Feature engineering plays an important role in object understanding. Expressive discriminative features can guarantee the success of object understanding tasks. With a remarkable ability for data abstraction, deep hierarchical architectures have the potential to represent objects. For 3D objects with multiple views, the existing deep learning methods cannot handle all the views with high quality. In this paper, we propose a 3D convolutional neural network, a deep hierarchical model whose structure is similar to that of a convolutional neural network. We employ the stochastic gradient descent (SGD) method to pretrain the convolutional layer, and then a back-propagation method is proposed to fine-tune the whole network. Finally, we use the result of the two phases for 3D object retrieval. The proposed method is shown to outperform the state-of-the-art approaches in experiments conducted on publicly available 3D object datasets.
Global Data Analysis and the Fragmentation Problem in Decision Tree Induction We investigate an inherent limitation of top-down decision tree induction in which the continuous partitioning of the instance space progressively lessens the statistical support of every partial (i.e. disjunctive) hypothesis, known as the fragmentation problem. We show, both theoretically and empirically, how the fragmentation problem adversely affects predictive accuracy as variation (a measure of concept difficulty) increases. Applying feature-construction techniques at every tree node, which we implement on a decision tree inducer DALI, is proved to only partially solve the fragmentation problem. Our study illustrates how a more robust solution must also assess the value of each partial hypothesis by recurring to all available training data, an approach we name global data analysis, which decision tree induction alone is unable to accomplish. The value of global data analysis is evaluated by comparing modified versions of C4.5 rules with C4.5 trees and DALI, on both artificial and real-world domains. Empirical results suggest the importance of combining both feature construction and global data analysis to solve the fragmentation problem.
Learning with local and global consistency We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.
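The smooth solution the abstract refers to has a well-known closed form, F* = (I - alpha S)^{-1} Y with S the symmetrically normalized affinity matrix. The numpy sketch below applies it to two toy Gaussian clusters with one label each; the kernel width and alpha are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(3, 0.3, (20, 2))])     # two separated clusters
y = -np.ones(40, dtype=int); y[0], y[20] = 0, 1  # one label per class

# Affinity matrix and its symmetric normalization S = D^-1/2 W D^-1/2.
W = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 0.5)
np.fill_diagonal(W, 0.0)
d = W.sum(1)
S = W / np.sqrt(np.outer(d, d))

Y = np.zeros((40, 2)); Y[y == 0, 0] = 1; Y[y == 1, 1] = 1
alpha = 0.9
F = np.linalg.solve(np.eye(40) - alpha * S, Y)   # closed-form propagation
print(F.argmax(1))  # first 20 points -> class 0, last 20 -> class 1
```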
A fast file system for UNIX
Updating action domain descriptions. Incorporating new information into a knowledge base is an important problem which has been widely investigated. In this paper, we study this problem in a formal framework for reasoning about actions and change. In this framework, action domains are described in an action language whose semantics is based on the notion of causality. Unlike the formalisms considered in the related work, this language allows straightforward representation of non-deterministic effects and indirect effects of (possibly concurrent) actions, as well as state constraints; therefore, the updates can be more general than elementary statements. The expressivity of this formalism allows us to study the update of an action domain description with a more general approach compared to related work. First of all, we consider the update of an action description with respect to further criteria, for instance, by ensuring that the updated description entails some observations, assertions, or general domain properties that constitute further constraints that are not expressible in an action description in general. Moreover, our framework allows us to discriminate amongst alternative updates of action domain descriptions and to single out a most preferable one, based on a given preference relation possibly dependent on the specified criteria. We study semantic and computational aspects of the update problem, and establish basic properties of updates as well as a decomposition theorem that gives rise to a divide and conquer approach to updating action descriptions under certain conditions. Furthermore, we study the computational complexity of decision problems around computing solutions, both for the generic setting and for two particular preference relations, viz. set-inclusion and weight-based preference. While deciding the existence of solutions and recognizing solutions are PSPACE-complete problems in general, the problems fall back into the polynomial hierarchy under restrictions on the additional constraints. We finally discuss methods to compute solutions and approximate solutions (which disregard preference). Our results provide a semantic and computational basis for developing systems that incorporate new information into action domain descriptions in an action language, in the presence of additional constraints.
The pitfalls of deploying solid-state drive RAIDs Solid-State Drives (SSDs) are about to radically change the way we look at storage systems. Without moving mechanical parts, they have the potential to supplement or even replace hard disks in performance-critical applications in the near future. Storage systems applied in such settings are usually built using RAIDs consisting of a number of individual drives for both performance and reliability reasons. Most existing work on SSDs, however, deals with the architecture at system level, the flash translation layer (FTL), and their influence on the overall performance of a single SSD device. Therefore, it is currently largely unclear whether RAIDs of SSDs exhibit different performance and reliability characteristics than those comprising hard disks and to which issues we have to pay special attention to ensure optimal operation in terms of performance and reliability. In this paper, we present a detailed analysis of SSD RAID configuration issues and derive several pitfalls for deploying SSDs in common RAID level configurations that can lead to severe performance degradation. After presenting potential solutions for each of these pitfalls, we concentrate on the particular challenge that SSDs can suffer from bad random write performance. We identify that over-provisioning offers a potential solution to this problem and validate the effectiveness of over-provisioning in common RAID level configurations by experiments whose results are compared to those of an analytical model that allows one to approximately predict the random write performance of SSD RAIDs based on the characteristics of a single SSD. Our results show that over-provisioning is indeed an effective method that can increase random write performance in SSD RAIDs by more than an order of magnitude, eliminating the potential Achilles heel of SSD-based storage systems.
A multiscale two-point flux-approximation method A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common for all such methods is that they rely on a compatible primal-dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regard to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
score_0–score_13: 1.045476, 0.020561, 0.017174, 0.014443, 0.012382, 0.00699, 0.00437, 0.001038, 0.000081, 0.000018, 0, 0, 0, 0
A logic-based calculus of events Formal Logic can be used to represent knowledge of many kinds for many purposes. It can be used to formalize programs, program specifications, databases, legislation, and natural language in general. For many such applications of logic a representation of time is necessary. Although there have been several attempts to formalize the notion of time in classical first-order logic, it is still widely believed that classical logic is not adequate for the representation of time and that some form of non-classical Temporal Logic is needed. In this paper, we shall outline a treatment of time, based on the notion of event, formalized in the Horn clause subset of classical logic augmented with negation as failure. The resulting formalization is executable as a logic program. We use the term "event calculus" to relate it to the well-known "situation calculus" (McCarthy and Hayes 1969). The main difference between the two is conceptual: the situation calculus deals with global states whereas the event calculus deals with local events and time periods. Like the event calculus, the situation calculus can be formalized by means of Horn clauses augmented with negation by failure (Kowalski 1979). The main intended applications investigated in this paper are the updating of databases and narrative understanding. In order to treat both cases uniformly we have taken the view that an update consists of the addition of new knowledge to a knowledge base. The effect of explicit deletion of information in conventional databases is obtained without deletion by adding new knowledge about the end of the period of time for which the information holds.
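The core persistence rule (a fluent holds if some event initiated it and no later event has terminated it) translates almost line-for-line into Python. The sketch below is a toy rendering with hand-written initiates/terminates tables, not the logic-program formulation of the paper.

```python
# Timestamped event occurrences and their effects on fluents.
events = [
    (1, 'give(bob, book)'),
    (5, 'give(mary, book)'),
]
initiates = {'give(bob, book)': 'has(bob, book)',
             'give(mary, book)': 'has(mary, book)'}
terminates = {'give(mary, book)': 'has(bob, book)'}

def holds_at(fluent, t):
    """Negation-as-failure persistence: initiated and not clipped."""
    starts = [s for s, e in events
              if s <= t and initiates.get(e) == fluent]
    if not starts:
        return False
    start = max(starts)
    return not any(start < s <= t and terminates.get(e) == fluent
                   for s, e in events)

print(holds_at('has(bob, book)', 3))  # True
print(holds_at('has(bob, book)', 6))  # False (clipped at time 5)
```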
A Multi-Agent System-driven AI Planning Approach to Biological Pathway Discovery As genomic and proteomic data is collected from high-throughput methods on a daily basis, subcellular components are identified and their in vitro behavior is characterized. However, much less is known of their in vivo activity because of the complex subcellular milieu they operate within. A component's milieu is determined by the biological pathways it participates in, and hence, the mechanisms by which it is regulated. We believe AI planning technology provides a modeling formalism for the task of biological pathway discovery, such that hypothetical pathways can be generated, queried and qualitatively simulated. The task of signal transduction pathway discovery is re-cast as a planning problem, one in which the initial and final states are known and cellular processes captured as abstract operators that modify the cellular environment. Thus, a valid plan that transforms the initial state into a goal state is a hypothetical pathway that prescribes the order of signaling events that must occur to effect the goal state. The planner is driven by data that is stored within a knowledge base and retrieved from heterogeneous sources (including gene expression, protein-protein interaction and literature mining) by a multi-agent information gathering system. We demonstrate the combined technology by translating the well-known EGF pathway into the planning formalism and deploying the Fast-Forward planner to reconstruct the pathway directly from the knowledge base.
Hypothesizing about signaling networks The current knowledge about signaling networks is largely incomplete. Thus biologists constantly need to revise or extend existing knowledge. The revision and/or extension is first formulated as theoretical hypotheses, then verified experimentally. Many computer-aided systems have been developed to assist biologists in undertaking this challenge. The majority of the systems help in finding “patterns” in data and leave the reasoning to biologists. A few systems have tried to automate the reasoning process of hypothesis formation. These systems generate hypotheses from a knowledge base and given observations. A main drawback of these knowledge-based systems is the knowledge representation formalism they use. These formalisms are mostly monotonic and are now known to be not quite suitable for knowledge representation, especially in dealing with the inherently incomplete knowledge about signaling networks. We propose an action language based framework for hypothesis formation for signaling networks. We show that the hypothesis formation problem can be translated into an abduction problem. This translation facilitates the complexity analysis and an efficient implementation of our system. We illustrate the applicability of our system with an example of hypothesis formation in the signaling network of the p53 protein.
A Formalism for Representing and Reasoning with Temporal Information, Event and Change In this paper we present a general formalism for representing and reasoning with temporal information, event and change. The temporal framework is a theory of time that takes both points and intervals as temporal primitives, and where the base logic is that of Kleene's three-valued logic. Thus, we can avoid the Divided Instant Problem (DIP). We present a three-valued based Temporal First-Order Nonmonotonic Logic (TFONL) that employs an explicit representation of time and events. We may embody default logic into TFONL, which takes into consideration the frame, qualification and ramification problems.
Reasoning about non-immediate triggers in biological networks Modeling molecular interactions in signalling networks is important from various perspectives, such as predicting side effects of drugs, explaining unusual cellular behavior, and drug and therapy design. Various formal languages have been proposed for representing and reasoning about molecular interactions. The interactions are modeled as triggered events in most of the approaches. The triggering of events is assumed to be immediate: once an interaction is triggered, it should occur immediately. Although working well for engineering systems, this assumption poses a serious problem in modeling biological systems. Our knowledge about biological systems is inherently incomplete; thus molecular interactions are constantly elaborated and refined at different granularities of abstraction. The model of immediate triggers cannot consistently deal with this refinement. In this paper we propose an action language to address this problem. We show that the language allows for refinements of biological knowledge, although at a higher cost in terms of complexity.
Semantics for a useful fragment of the situation calculus In a recent paper, we presented a new logic called ES for reasoning about the knowledge, action, and perception of an agent. Although formulated using modal operators, we argued that the language was in fact a dialect of the situation calculus but with the situation terms suppressed. This allowed us to develop a clean and workable semantics for the language without piggybacking on the generic Tarski semantics for first-order logic. In this paper, we reconsider the relation between ES and the situation calculus and show how to map sentences of ES into the situation calculus. We argue that the fragment of the situation calculus represented by ES is rich enough to handle the basic action theories defined by Reiter as well as Golog. Finally, we show that in the full second-order version of ES, almost all of the situation calculus can be accommodated.
A circumscriptive calculus of events A calculus of events is presented in which domain constraints, concurrent events, and events with non-deterministic effects can be represented. The paper offers a non-monotonic solution to the frame problem for this formalism that combines two of the techniques developed for the situation calculus, namely causal and state-based minimisation. A theorem is presented which guarantees that temporal projection will not interfere with minimisation in this solution, even in domains with ramifications, concurrency, and non-determinism. Finally, the paper shows how the formalism can be extended to cope with continuous change, whilst preserving the conditions for the theorem to apply.
A simple declarative language for describing narratives with actions We describe a simple declarative language E for describing the effects of a series of action occurrences within a narrative. E is analogous to Gelfond and Lifschitz's Language A and its extensions, but is based on a different ontology. The semantics of E is based on a simple characterisation of persistence which facilitates a modular approach to extending the expressivity of the language. Domain descriptions in A can be translated to equivalent theories in E. We show how, in the context of reasoning about actions, E's narrative-based ontology may be exploited in order to characterise and synthesise two complementary notions of explanation. According to the first notion, explanation may be partly modelled as the process of suitably extending an apparently inconsistent theory written in E so as to establish consistency, thus providing a natural method, in many cases, to account for conflicting sets of information about the domain. According to the second notion, observations made at later times can sometimes be explained in terms of what is true at earlier times. This enables domains to be given an alternative characterisation in which knowledge arising from observations is appropriately separated from other aspects of the domain. We also describe how E domains may be implemented as Event Calculus style logic programs, which facilitate automated reasoning both backwards and forwards in time, and which behave correctly even when the knowledge entailed by the domain description is incomplete.
Logic Programming and Reasoning with Incomplete Information The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by new modal operators K and M (where for the set of rules T and formula F, KF stands for "F is known to be true by a reasoner with a set of premises T" and MF ...
Complexity of probabilistic planning under average rewards A general and expressive model of sequential decision making under uncertainty is provided by the Markov decision processes (MDPs) framework. Complex applications with very large state spaces are best modelled implicitly (instead of explicitly by enumerating the state space), for example as precondition-effect operators, the representation used in AI planning. This kind of representation is very powerful, and it makes the construction of policies/plans computationally very complex. In many applications, average reward over unit time is the relevant rationality criterion, as opposed to the more widely used discounted reward criterion, and to provide a solid basis for the development of efficient planning algorithms, the computational complexity of the decision problems related to average rewards has to be analyzed. We investigate the complexity of the policy/plan existence problem for MDPs under the average reward criterion, with MDPs represented in terms of conditional probabilistic precondition-effect operators. We consider policies with and without memory, and with different degrees of sensing/observability. The unrestricted policy existence problem for the partially observable cases was earlier known to be undecidable. The results place the remaining computational problems in the complexity classes EXP and NEXP (deterministic and nondeterministic exponential time).
Compilation of a High-level Temporal Planning Language into PDDL 2.1 An important aspect of any automatic planner is the language in which the user expresses problem instances. A rich language is an advantage for the user, whereas a simple language is an advantage for the programmer who must write a program to solve all planning problems expressible in the language. Considering the temporal planning language PDDL 2.1 as a low-level language, we show how to automatically compile a much richer language into PDDL 2.1. The worst-case complexity of this transformation is quadratic. Our high-level language allows the user to declare time-points and impose simple temporal constraints between them. Conditions and effects can be imposed at time-points, over intervals and over sliding intervals within fixed intervals. Non-instantaneous transitions can also be modelled.
Deterministic distribution sort in shared and distributed memory multiprocessors We present an elegant deterministic load balancing strategy for distribution sort that is applicable to a wide variety of parallel disks and parallel memory hierarchies with both single and parallel processors. The simplest application of the strategy is an optimal deterministic algorithm for external sorting with multiple disks and parallel processors. In each input/output (I/O) operation, each of the D ≥ 1 disks can simultaneously transfer a block of B contiguous records. Our two measures of performance are the number of I/Os and the amount of work done by the CPU(s); our algorithm is simultaneously optimal for both measures. We also show how to sort deterministically in parallel memory hierarchies. When the processors are interconnected by any sort of PRAM, our algorithms are optimal for all parallel memory hierarchies; when the interconnection network is a hypercube, our algorithms are either optimal or best-known.
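Stripped of the parallel-disk model, the splitter-based skeleton of a distribution sort looks as follows. This single-process numpy sketch only shows deterministic splitter selection and bucket partitioning, not the paper's I/O-optimal load-balancing guarantees; the bucket count and oversampling factor are arbitrary.

```python
import numpy as np

def distribution_sort(a, n_buckets=4, oversample=8):
    a = np.asarray(a)
    # Deterministic, regularly strided sample of the input.
    sample = np.sort(a[:: max(1, len(a) // (n_buckets * oversample))])
    # Interior quantiles of the sample become the bucket splitters.
    splitters = sample[np.linspace(0, len(sample) - 1,
                                   n_buckets + 1).astype(int)][1:-1]
    # Assign each record to a bucket (one bucket per disk/processor).
    ids = np.searchsorted(splitters, a, side='right')
    buckets = [np.sort(a[ids == b]) for b in range(n_buckets)]
    return np.concatenate(buckets), [len(b) for b in buckets]

rng = np.random.default_rng(0)
out, sizes = distribution_sort(rng.integers(0, 10_000, 1_000))
print(np.all(out[:-1] <= out[1:]), sizes)  # sorted, roughly even buckets
```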
Dynamic Knowledge Representation and Its Applications This paper has two main objectives. One is to show that the dynamic knowledge representation paradigm introduced in [ALP+00] and the associated language LUPS, defined in [APPP99], constitute natural, powerful and expressive tools for representing dynamically changing knowledge. We do so by demonstrating the applicability of the dynamic knowledge representation paradigm and the language LUPS to several broad knowledge representation domains, for each of which we provide an illustrative example. Our second objective is to extend our approach to allow proper handling of conflicting updates. So far, our research on knowledge updates was restricted to a two-valued semantics, which, in the presence of conflicting updates, leads to an inconsistent update, even though the updated knowledge base does not necessarily contain any truly contradictory information. By extending our approach to the three-valued semantics we gain the added expressiveness allowing us to express undefined or noncommittal updates.
Unsupervised (Parameter) Learning For Mrfs On Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach: a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double-loop algorithm with PCD learning experimentally and show that the former converges faster and more stably than the latter.
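For reference, the stochastic-gradient baseline the paper compares against can be sketched in a few lines: one CD-1 update for a Bernoulli RBM, with biases omitted for brevity. The sizes, learning rate, and toy data are arbitrary, and the paper's modified EM alternative is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 4, 0.05
W = 0.01 * rng.normal(size=(n_vis, n_hid))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, v0):
    ph0 = sigmoid(v0 @ W)                       # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                     # P(v=1 | h0)
    v1 = (rng.random(pv1.shape) < pv1) * 1.0    # "negative" visibles
    ph1 = sigmoid(v1 @ W)
    grad = (v0.T @ ph0 - v1.T @ ph1) / len(v0)  # positive - negative phase
    return W + lr * grad

data = (rng.random((32, n_vis)) < 0.5) * 1.0    # toy binary batch
for _ in range(100):
    W = cd1_step(W, data)
print(W.shape)  # (6, 4): learned visible-hidden couplings
```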
score_0–score_13: 1.003575, 0.005648, 0.005063, 0.004193, 0.003376, 0.002761, 0.002232, 0.001291, 0.000702, 0.000045, 0, 0, 0, 0
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Autoencoders, Minimum Description Length and Helmholtz Free Energy An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is...
Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs) applied to image patches. We will show that by training on large image databases we are able to outperform the current state-of-the-art image denoising methods. In addition, our method achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap with a second type of theoretical bound. Our approach is easily adapted to less extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes, for which we achieve excellent results as well. We will show that combining a block-matching procedure with MLPs can further improve the results on certain images. In a second paper, we detail the training trade-offs and the inner mechanisms of our MLPs.
An up-to-date comparison of state-of-the-art classification algorithms. Up-to-date report on the accuracy and efficiency of state-of-the-art classifiers. We compare the accuracy of 11 classification algorithms pairwise and groupwise. We examine separately the training, parameter-tuning, and testing time. GBDT and Random Forests yield highest accuracy, outperforming SVM. GBDT is the fastest in testing, Naive Bayes the fastest in training. Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top-5, alongside GBDT, RF, SVM, and C4.5 but this performance varies widely across all data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but second fastest in prediction efficiency. SRC shows good accuracy performance but it is the slowest classifier in both training and testing.
Non-Local Manifold Tangent Learning We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests to explore non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local non-parametric method fails.
An Information Measure For Classification
Explicit guiding auto-encoders for learning meaningful representation The auto-encoder model plays a crucial role in the success of deep learning. During the pre-training phase, auto-encoders learn a representation that helps improve the performance of the entire neural network during the fine-tuning phase of deep learning. However, the learned representation is not always meaningful and the network does not necessarily achieve higher performance with such representation because auto-encoders are trained in an unsupervised manner without knowing the specific task targeted in the fine-tuning phase. In this paper, we propose a novel approach to train auto-encoders by adding an explicit guiding term to the traditional reconstruction cost function that encourages the auto-encoder to learn meaningful features. Particularly, the guiding term is the classification error with respect to the representation learned by the auto-encoder, and a meaningful representation means that a network using the representation as input has a low classification error in a classification task. In our experiments, we show that the additional explicit guiding term helps the auto-encoder understand the prospective target in advance. During learning, it can drive the learning toward a minimum with better generalization with respect to the particular supervised task on the dataset. Over a range of image classification benchmarks, we achieve equal or superior results to baseline auto-encoders with the same configuration.
Descartes' rule of signs for radial basis function neural networks. We establish versions of Descartes' rule of signs for radial basis function (RBF) neural networks. The RBF rules of signs provide tight bounds for the number of zeros of univariate networks with certain parameter restrictions. Moreover, they can be used to infer that the Vapnik-Chervonenkis (VC) dimension and pseudodimension of these networks are no more than linear. This contrasts with previous work showing that RBF neural networks with two or more input nodes have superlinear VC dimension. The rules also give rise to lower bounds for network sizes, thus demonstrating the relevance of network parameters for the complexity of computing with RBF neural networks.
Deep learning from temporal coherence in video This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.
Learning internal representations Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for automatically learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate internal representation for a learning environment and then...
Parallel networks that learn to pronounce English text This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed human performance. (i) The learning follows a power law. (ii) The more words the network learns, the better it is at generalizing and correctly pronouncing new words. (iii) The performance of the network degrades very slowly as connections in the network are damaged: no single link or processing unit is essential. (iv) Relearning after damage is much faster than learning during the original training. (v) Distributed or spaced practice is more effective for long-term retention than massed practice. Network models can be constructed that have the same performance and learning characteristics on a particular task, but differ completely at the levels of synaptic strengths and single-unit responses. However, hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units. This suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
No-reference video quality measurement: added value of machine learning Video quality measurement is an important component in the end-to-end video delivery chain. Video quality is, however, subjective, and thus, there will always be interobserver differences in the subjective opinion about the visual quality of the same video. Despite this, most existing works on objective quality measurement typically focus only on predicting a single score and evaluate their prediction accuracies based on how close it is to the mean opinion scores (or similar average based ratings). Clearly, such an approach ignores the underlying diversities in the subjective scoring process and, as a result, does not allow further analysis on how reliable the objective prediction is in terms of subjective variability. Consequently, the aim of this paper is to analyze this issue and present a machine-learning based solution to address it. We demonstrate the utility of our ideas by considering the practical scenario of video broadcast transmissions with focus on digital terrestrial television (DTT) and proposing a no-reference objective video quality estimator for such application. We conducted meaningful verification studies on different video content (including video clips recorded from real DTT broadcast transmissions) in order to verify the performance of the proposed solution. (C) 2015 SPIE and IS&T
PI/OT: parallel I/O templates This paper presents a novel, top-down, high-level approach to parallelizing file I/O. Each parallel file descriptor is annotated with a high-level specification, or template, of the expected parallel behavior. The annotations are external to and independent of the source code. At run-time, all I/O using a parallel file descriptor adheres to the semantics of the selected template. By separating the parallel I/O specifications from the code, a user can quickly change the I/O behavior without rewriting the code. Templates can be composed hierarchically to construct complex access patterns. Two sample parallel programs using these templates are compared against versions implemented in an existing parallel I/O system (PIOUS). The sample programs show that the use of parallel I/O templates is beneficial from both the performance and software engineering points of view.
A New Algorithm for Generative Planning Existing generative planners have two properties that one would like to avoid if possible. First, they use a single mechanism to solve problems both of action selection and of action sequencing, thereby failing to exploit recent progress on scheduling and satisfiability algorithms. Second, the context in which a subgoal is solved is governed in part by the solutions to other subgoals, as opposed to plans for the subgoals being developed in isolation and then merged to yield a plan for the conjunction. We present a reformulation of the planning problem that appears to avoid these difficulties, describing an algorithm that solves subgoals in isolation and then appeals to a separate NP-complete scheduling test to determine whether the actions that have been selected can be combined in a useful way.
Deep Representation Hierarchies for 3D Active Vision - Designing Specializations in Perception-Action Loops.
1.006447
0.006667
0.006667
0.005673
0.00557
0.005556
0.002827
0.001896
0.000871
0.000059
0.000001
0
0
0
Minimum Variance-Embedded Multi-layer Kernel Ridge Regression for One-class Classification In this paper, a multi-layer architecture is proposed by stacking minimum variance-embedded Kernel Ridge Regression (KRR) based Auto-Encoders in a hierarchical fashion for one-class classification, and is referred to as VMKOC. Two types of Auto-Encoders are employed for this purpose. One is the vanilla Auto-Encoder and the other is the variance-embedded Auto-Encoder. The first minimizes only the reconstruction error, and the latter minimizes the intra-class variance and the reconstruction error simultaneously within the multi-layer architecture. These Auto-Encoders are employed as multiple layers to project the input features into a new feature space, and the projected features are passed to the last layer of VMKOC. The last layer of VMKOC is constructed by a KRR-based one-class classifier. Extensive experiments are conducted on 17 benchmark datasets to verify the effectiveness of VMKOC over 11 existing state-of-the-art kernel-based one-class classifiers. The statistical significance of the obtained outcomes is also verified by employing a Friedman test on the obtained results.
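For background on the KRR layers this entry stacks, recall the standard closed form for kernel ridge regression (a textbook identity, not a detail specific to the paper): with kernel matrix K, targets Y, and regularization weight λ, the dual objective and its minimizer are

```latex
\min_{\alpha}\; \lVert Y - K\alpha \rVert^{2} + \lambda\, \alpha^{\top} K \alpha
\qquad\Longrightarrow\qquad
\alpha^{\ast} = (K + \lambda I)^{-1} Y .
```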
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
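A standard one-rule example (textbook material, not taken from the abstract) illustrates the definition: for the program below, the reduct with respect to the candidate set {p} drops the satisfied negative body, and the least model of the reduct reproduces the candidate, so {p} is a stable model.

```latex
P = \{\, p \leftarrow \operatorname{not} q \,\}, \qquad M = \{p\}: \qquad
P^{M} = \{\, p \leftarrow \top \,\}, \qquad \mathrm{LM}\big(P^{M}\big) = \{p\} = M .
```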
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
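For a concrete sense of the computation this abstract outlines, a minimal numpy sketch of kernel PCA with an RBF kernel follows; the kernel choice, the gamma parameter, and the function name are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal kernel PCA sketch: RBF kernel, centering, eigendecomposition."""
    # RBF kernel matrix from pairwise squared Euclidean distances.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    # Center the kernel matrix (equivalent to centering in feature space).
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition; eigh returns ascending order, so reverse it.
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]

    # Scale eigenvectors so feature-space components have unit norm.
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas  # projections of the training points

# Example usage on random data:
# Z = rbf_kernel_pca(np.random.randn(100, 5), n_components=2)
```

Centering the kernel matrix corresponds to centering the mapped data in feature space, which the eigenvalue formulation requires.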
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
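The "square root" factorization at the heart of this approach reduces, in the linearized case, to solving a least-squares problem by QR; the dense-matrix sketch below shows only that core (the function name is made up for illustration, and a real implementation would exploit sparsity and column ordering as the abstract notes).

```python
import numpy as np

def solve_smoothing_qr(A, b):
    """Solve the linearized smoothing problem min ||A x - b||^2 via QR.

    A stands in for the (whitened) measurement Jacobian over all poses
    and landmarks; R is the square-root information matrix.
    """
    Q, R = np.linalg.qr(A)        # A = Q R with R upper triangular
    d = Q.T @ b                   # rotate the residual
    return np.linalg.solve(R, d)  # back-substitute R x = d

# Example usage with a random overdetermined system:
# A, b = np.random.randn(50, 10), np.random.randn(50)
# x = solve_smoothing_qr(A, b)
```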
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Scope Consistency: A Bridge between Release Consistency and Entry Consistency Systems that maintain coherence at large granularity, such as shared virtual memory systems, suffer from false sharing and extra communication. Relaxed memory consistency models have been used to alleviate these problems, but at a cost in programming complexity. Release Consistency (RC) and Lazy Release Consistency (LRC) are accepted to offer a reasonable tradeoff between performance and programming complexity. Entry Consistency (EC) offers a more relaxed consistency model, but it requires explicit association of shared data objects with synchronization variables. The programming burden of providing such associations can be substantial. This paper proposes a new consistency model for such systems, called Scope Consistency (ScC), which offers most of the performance advantages of the EC model without requiring explicit bindings between data and synchronization variables. Instead, ScC dynamically detects the associations implied by the programmer, using a programming interface similar to that of RC or LRC. We propose two ScC protocols: one that uses hardware support for fine-grained remote writes (automatic updates or AU) and the other, an all-software protocol. We compare the AU-based ScC protocol with Automatic Update Release Consistency (AURC), a modified LRC protocol that also takes advantage of automatic update support. AURC already improves performance substantially over an all-software LRC protocol. For three of the five applications we used, ScC further improves the speedups achieved by AURC by about 10%.
Adaptive cache coherence over a high bandwidth broadband mesh network Networks have traditionally been an obstacle to high performance distributed computing. Specific problems are insufficient bandwidth and long transaction latencies. While pipelining data can achieve high bandwidth, it does nothing for latency, which is still a bottleneck in performance. One approach is to develop a cache coherence protocol which exploits recurring data sharing patterns to reduce the impact of latency. This paper proposes an adaptive cache coherence protocol which detects producer–consumer type sharing and maintains coherence on only those cache blocks which exhibit producer–consumer sharing via updates rather than invalidates. Execution driven simulations of this protocol show improved performance compared to a standard write-invalidate protocol and a competitive update protocol. When there are no access patterns to exploit, the protocol does not degrade performance. When there is producer–consumer type sharing, the proposed protocol runs benchmarks up to 30% faster than the better of either write-invalidate or competitive update. As a side-effect, it shows improved tolerance of increasing network latency.
Combining compile-time and run-time support for efficient software distributed shared memory We describe an integrated compile time and run time system for efficient shared memory parallel computing on distributed memory machines. The combined system presents the user with a shared memory programming model. The run time system implements a consistent shared memory abstraction using memory access detection and automatic data caching. The compiler improves the efficiency of the shared memor...
A New Home-Based Software DSM Protocol for SMP Clusters This paper introduces an SMP protocol for the home-based software DSM system JIAJIA. In the protocol, intra-node processes in an SMP node share their home pages through hardware coherent sharing so as to take full advantage of the home effect of home-based software DSMs. In contrast, cached remote pages of a process are not shared by its intra-node partners, to avoid cache page conflicts within an SMP. Besides, JIAJIA also implements shared memory communication among processes within the same SMP node to accelerate intra-node communication. Performance evaluation with some well accepted benchmarks and real applications in a cluster of four two-processor nodes shows that the SMP protocol of JIAJIA reduces remote accesses, diffs, and consequently message amounts in all of the ten benchmarks, and as a result obtains noticeable performance improvement in seven.
Optimizing Home-Based Software DSM Protocols Software DSMs can be categorized into homeless and home-based systems; both have strengths and weaknesses when compared to each other. This paper introduces optimization methods to exploit advantages and offset disadvantages of the home-based protocol in the home-based software DSM JIAJIA. The first optimization reduces the overhead of writes to home pages through a lazy home page write detection scheme. The normal write detection scheme write-protects shared pages at the beginning of a synchronization interval, while lazy home page write detection delays write-protecting a home page until the page is first fetched in the interval, so that home pages that are not cached by remote processors do not need to be write-protected. The second optimization avoids fetching the whole page on a page fault by dividing a page into blocks and fetching only those blocks that are dirty with respect to the faulting processor. A write vector table is maintained for each shared page in its home to record, for each processor, which block(s) have been modified since the processor last fetched the page. The third optimization adaptively migrates the home of a page to the processor that most frequently writes to the page, to reduce twin and diff overhead. Migration information is piggybacked on barrier messages and no additional communication is required for the migration. Performance evaluation with some well-accepted benchmarks and real applications shows that the above optimization methods can reduce page faults, message amounts, and diffs dramatically and consequently improve performance significantly.
A Comparison of Two Strategies of Dynamic Data Prefetching in Software DSM A major overhead of software DSM is the long remote access latency when the accessed page is not in the local cache. One method for tolerating the remote access latency is to prefetch the pages before they are accessed. This paper compares two methods of dynamic data prefetching on the JIAJIA software DSM: history prefetching, which exploits the temporal locality of the program, and aggregate prefetching, which exploits the spatial locality of the program. Experiments with eight well-accepted benchmarks and a real application show that both can dramatically reduce the number of remote page faults and the number of messages exchanged. All applications benefit from the prefetching in overall running time, and four achieve a performance improvement of 10%-20%. We then analyze the advantages and disadvantages of the two prefetching strategies. We find that aggregate prefetching may be more efficient than history prefetching for most applications in software DSM systems.
The Design Of A Capability-Based Distributed Operating System
Automatic generation of help from interface design models Model-based interface design can save substantial effort in building help systems for interactive applications by generating help automatically from the model used to implement the interface, and by providing a framework for developers to easily refine the automatically-generated help texts. This paper describes a system that generates hypertext-based help about data presented in application displays, commands to manipulate data, and interaction techniques to invoke commands. The refinement component provides several levels of customization, including programming-by-example techniques to let developers directly edit help windows that the system produces, and the possibility to refine help generation rules.
Automatic I/O hint generation through speculative execution Aggressive prefetching is an effective technique for reducing the execution times of disk-bound applications; that is, applications that manipulate data too large or too infrequently used to be found in file or disk caches. While automatic prefetching approaches based on static analysis or historical access patterns are effective for some workloads, they are not as effective as manually-driven (programmer-inserted) prefetching for applications with irregular or input-dependent access patterns. In this paper, we propose to exploit whatever processor cycles are left idle while an application is stalled on I/O by using these cycles to dynamically analyze the application and predict its future I/O accesses. Our approach is to speculatively pre-execute the application's code in order to discover and issue hints for its future read accesses. Coupled with an aggressive hint-driven prefetching system, this automatic approach could be applied to arbitrary applications, and should be particularly effective for those with irregular and, up to a point, input-dependent access patterns. We have designed and implemented a binary modification tool, called "SpecHint", that transforms Digital UNIX application binaries to perform speculative execution and issue hints. TIP [Patterson95], an informed prefetching and caching manager, takes advantage of these application-generated hints to better use the file cache and I/O resources. We evaluate our design and implementation with three real-world, disk-bound applications from the TIP benchmark suite. While our techniques are currently unsophisticated, they perform surprisingly well. Without any manual modifications, we achieve 29%, 69% and 70% reductions in execution time when the data files are striped over four disks, improving performance by the same amount as manually-hinted prefetching for two of our three applications. We examine the performance of our design in a variety of configurations, explaining the circumstances under which it falls short of that achieved when applications were manually modified to issue hints. Through simulation, we also estimate how the performance of our design will be affected by the widening gap between processor and disk speeds.
Disk caching in large database and timeshared systems We present the results of a variety of trace-driven simulations of disk cache designs using traces from a variety of mainframe timesharing and database systems in production use. We compute miss ratios, run lengths, traffic ratios, cache residency times, degree of memory pollution and other statistics for a variety of designs, varying block size, prefetching algorithm and write algorithm. We find that for this workload, sequential prefetching produces a significant (about 20%) but still limited improvement in the miss ratio, even using a powerful technique for detecting sequentiality. Copy-back writing decreased write traffic relative to write-through by more than 50%; periodic flushing of the dirty blocks increased write traffic only slightly compared to pure write-back, and then only for large cache sizes. Write-allocate had little effect compared to no-write-allocate. Block sizes of over a track don't appear to be useful. Limiting cache occupancy by a single process or transaction appears to have little effect. This study is unique in the variety and quality of the data used in the studies.
Unsupervised Learning of Multiple Motifs in Biopolymers Using Expectation Maximization The MEME algorithm extends the expectation maximization (EM) algorithm for identifying motifs in unaligned biopolymer sequences. The aim of MEME is to discover new motifs in a set of biopolymer sequences where little or nothing is known in advance about any motifs that may be present. MEME innovations expand the range of problems which can be solved using EM and increase the chance of finding good solutions. First, subsequences which actually occur in the biopolymer sequences are used as starting points for the EM algorithm to increase the probability of finding globally optimal motifs. Second, the assumption that each sequence contains exactly one occurrence of the shared motif is removed. This allows multiple appearances of a motif to occur in any sequence and permits the algorithm to ignore sequences with no appearance of the shared motif, increasing its resistance to noisy data. Third, a method for probabilistically erasing shared motifs after they are found is incorporated so that several distinct motifs can be found in the same set of sequences, both when different motifs appear in different sequences and when a single sequence may contain multiple motifs. Experiments show that MEME can discover both the CRP and LexA binding sites from a set of sequences which contain one or both sites, and that MEME can discover both the −10 and −35 promoter regions in a set of E. coli sequences.
A feedback-driven proportion allocator for real-rate scheduling In this paper we propose changing the decades-old practice of allocating CPU to threads based on priority to a scheme based on proportion and period. Our scheme allocates to each thread a percentage of CPU cycles over a period of time, and uses a feedback-based adaptive scheduler to assign automatically both proportion and period. Applications with known requirements, such as isochronous software devices, can bypass the adaptive scheduler by specifying their desired proportion and/or period. As a result, our scheme provides reservations to applications that need them, and the benefits of proportion and period to those that do not. Adaptive scheduling using proportion and period has several distinct benefits over either fixed or adaptive priority based schemes: finer grain control of allocation, lower variance in the amount of cycles allocated to a thread, and avoidance of accidental priority inversion and starvation, including defense against denial-of-service attacks. This paper describes our design of an adaptive controller and proportion-period scheduler, its implementation in Linux, and presents experimental validation of our approach.
A Markov Decision Problem Approach to Goal Attainment A new Markov decision problem (MDP)-based method for managing goal attainment (GA), which is the process of planning and controlling actions that are related to the achievement of a set of defined goals in the presence of resource and time constraints, is proposed. Specifically, we address the problem as one of optimally selecting a sequence of actions to transform the system and/or its environment from an initial state to a desired state. We begin with a method of explicitly mapping an action-GA graph to an MDP graph and developing a dynamic programming (DP) recursion to solve the MDP problem. For larger problems having exponential complexity with respect to the number of goals, we propose guided search algorithms such as AO*, AOepsiv*, and greedy search techniques, whose search power rests on the efficiency of their heuristic evaluation functions (HEFs). Our contribution in this part stems from the introduction of a new problem-specific HEF to aid the search process. We demonstrate reductions in the computational costs of the proposed techniques through performance comparison with standard DP techniques. We conclude this paper with a method to address situations in which alternative strategies (e.g., second best) are required. The new extended AO* algorithm identifies alternative control sequences for attaining the organizational goals.
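For reference, the dynamic programming recursion this entry builds on has, in its standard cost-minimization form, the shape below; the symbols c (stage cost), P (transition probability), and V (cost-to-go) are generic names, and the paper's exact formulation may differ.

```latex
V^{\ast}(s) \;=\; \min_{a \in A(s)} \Big[\, c(s,a) \;+\; \sum_{s'} P(s' \mid s, a)\, V^{\ast}(s') \,\Big]
```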
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
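The storage arithmetic in this scheme is easy to check; the toy computation below (with an illustrative array parameter, not a figure from the paper) shows the redundancy overhead before and after adding the mirrored parity elements.

```python
n = 4                         # illustrative array parameter
data, parity = n * n, 2 * n   # n^2 data elements, 2n parity elements
extra = n                     # n added parity elements mirroring half the parity

before = parity / (data + parity)
after = (parity + extra) / (data + parity + extra)
print(f"redundancy before: {before:.1%}, after: {after:.1%}")
# For n = 4: 33.3% before, 42.9% after.
```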
1.052751
0.052621
0.052621
0.028128
0.020297
0.008502
0.000159
0.000006
0.000001
0
0
0
0
0
Word-level sequential memory abstraction for model checking Many designs intermingle large memories with wide data paths and nontrivial control. Verifying such systems is challenging, and users often get little traction when applying model checking to decide full or partial end-to-end correctness of such designs. Interestingly, a subclass of these systems can be proven correct by reasoning only about a small number of the memory entries at a limited number of time points. In this paper, we leverage this fact to abstract certain memories in a sound way, and we demonstrate how our memory abstraction coupled with an abstraction refinement algorithm can be used to prove correctness properties for three challenging designs from industry and academia. Key features of our approach are that we operate on standard safety property verification problems, that we proceed completely automatically without any need for abstraction hints, that we can use any bit-level model checker as a back-end decision procedure, and that our algorithms fit seamlessly into a standard transformational verification paradigm.
Integrating linear arithmetic into superposition calculus We present a method of integrating linear rational arithmetic into superposition calculus for first-order logic. One of our main results is completeness of the resulting calculus under some finiteness assumptions.
High capacity and automatic functional extraction tool for industrial VLSI circuit designs In this paper we present an advanced functional extraction tool for automatic generation of high-level RTL from a switch-level circuit netlist representation. The tool is called FEV-Extract and is part of a comprehensive Formal Equivalence Verification (FEV) system developed at Intel to verify modern microprocessor designs. FEV-Extract employs a powerful hierarchical analysis procedure, and advanced and generic algorithms for automatic recognition of logical primitives, to cope with a variety of circuit design styles and their complexity. Logic equations are then extracted to generate a behavioral RTL model described in industry-standard HDL languages, to be used in formal equivalence verification, logic simulation, synthesis and testability flows.
A Refinement Method for Validity Checking of Quantified First-Order Formulas in Hardware Verification We introduce a heuristic for automatically checking the validity of first-order formulas of the form $\forall \alpha^{m} \exists \beta^{n}.\ \Psi(\alpha^{m}, \beta^{n})$ that are encountered in inductive proofs of hardware correctness. The heuristic introduced in this paper is used to automatically check the validity of k-step induction formulas needed to verify hardware designs. The heuristic works on word-level designs that can have data and address buses of arbitrary widths. Our refinement heuristic relies on the idea of predicate instantiation introduced in [2]. The heuristic proves quantified formulas by the use of a validity checker, CVC [21], and a first-order theorem prover, Otter [16]. Our heuristic can be used as a stand-alone technique to verify word-level designs or as a component in an interactive theorem prover. We show the effectiveness of this heuristic for hardware verification by verifying a number of hardware designs completely automatically. The large size of the quantified formulas encountered in these examples shows the effectiveness of our heuristic as a component of a theorem prover.
The design and implementation of VAMPIRE In this article we describe VAMPIRE: a high-performance theorem prover for first-order logic. As our description is mostly targeted to the developers of such systems and specialists in automated reasoning, it focuses on the design of the system and some key implementation features. We also analyze the performance of the prover at CASC-JC.
On the complexity of blocks-world planning In this paper, we show that in the best-known version of the blocks world (and several related versions), planning is difficult, in the sense that finding an optimal plan is NP-hard. However, the NP-hardness is not due to deleted-condition interactions, but instead due to a situation which we call a deadlock. For problems that do not contain deadlocks, there is a simple hill-climbing strategy that can easily find an optimal plan, regardless of whether or not the problem contains any...
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
A theory of diagnosis from first principles Without Abstract
Dependent Fluents We discuss the persistence of the indirect effects of an action—the question when such effects are subject to the commonsense law of inertia, and how to describe their evolution in the cases when inertia does not apply. Our model of nonpersistent effects involves the assumption that the value of the fluent in question is determined by the values of other fluents, although the dependency may be partially or completely unknown. This view leads us to a new high-level action language ARD (for Actions, Ramifications and Dependencies) that is capable of describing both persistent and nonpersistent effects. Unlike the action languages introduced in the past, ARD is "non-Markovian," in the sense that the evolution of the fluents described in this language may depend on their history, and not only on their current values.
Monotonic reductions, representative equivalence, and compilation of intractable problems The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science. In this article, we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice.
Multi-threading and remote latency in software DSMs This paper evaluates the use of per-node multi-threading to hide remote memory and synchronization latencies in a software DSM. As with hardware systems, multi-threading in software systems can be used to reduce the costs of remote requests by switching threads when the current thread blocks. We added multi-threading to the CVM software DSM and evaluated its impact on performance for a suite of common shared memory programs. Multi-threading resulted in speed improvements of at least 17% in three of the seven applications in our suite, and lesser improvements in the other applications. However, we found that: good performance is not always achievable transparently for non-trivial applications; multi-threading can negatively interact with DSM operations; multi-threading decreases cache and TLB locality; and any multi-threading speedup is dependent on available work.
Phoenix: a safe in-memory file system Phoenix maintains two timestamped versions of the in-memory file system, allowing a reserve version that ensures safety for diskless computers with battery-powered memory.
Representing the process semantics in the situation calculus This paper presents a formal method based on the high-level semantics of processes to reason about continuous change. With a case study we show how the semantics of processes can be integrated with the situation calculus. The soundness and completeness of the situation calculus with respect to the process semantics are proven. Furthermore, a logic programming implementation is presented to support the semantics of processes within the situation calculus.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.101681
0.103361
0.103361
0.051681
0.027252
0
0
0
0
0
0
0
0
0
Server-side prefetching in distributed file systems. This paper presents a proactive data prefetching mechanism on storage servers for distributed file systems to achieve better input/output (I/O) performance. This mechanism requires keeping track of ...
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Representing concurrent actions in extended logic programming Gelfond and Lifschitz introduce a declarative language A for describing effects of actions and define a translation of theories in this language into extended logic programs (ELPs). The purpose of this paper is to extend the language and the translation to allow reasoning about the effects of concurrent actions. The logic programming formalization of the situation calculus with concurrent actions presented in the paper can be of independent interest and may serve as a test bed for the investigation of various transformations and logic programming inference mechanisms.
Hypothesizing about signaling networks The current knowledge about signaling networks is largely incomplete. Thus biologists constantly need to revise or extend existing knowledge. The revision and/or extension is first formulated as theoretical hypotheses, then verified experimentally. Many computer-aided systems have been developed to assist biologists in undertaking this challenge. The majority of the systems help in finding “patterns” in data and leave the reasoning to biologists. A few systems have tried to automate the reasoning process of hypothesis formation. These systems generate hypotheses from a knowledge base and given observations. A main drawback of these knowledge-based systems is the knowledge representation formalism they use. These formalisms are mostly monotonic and are now known to be not quite suitable for knowledge representation, especially in dealing with the inherently incomplete knowledge about signaling networks. We propose an action language based framework for hypothesis formation for signaling networks. We show that the hypothesis formation problem can be translated into an abduction problem. This translation facilitates the complexity analysis and an efficient implementation of our system. We illustrate the applicability of our system with an example of hypothesis formation in the signaling network of the p53 protein.
Cognitive Technical Systems -- What Is the Role of Artificial Intelligence? The newly established cluster of excellence CoTeSys investigates the realization of cognitive capabilities such as perception, learning, reasoning, planning, and execution for technical systems including humanoid robots, flexible manufacturing systems, and autonomous vehicles. In this paper we describe cognitive technical systems using a sensor-equipped kitchen with a robotic assistant as an example. We will particularly consider the role of Artificial Intelligence in the research enterprise. Key research foci of Artificial Intelligence research in CoTeSys include (i) symbolic representations grounded in perception and action, (ii) first-order probabilistic representations of actions, objects, and situations, (iii) reasoning about objects and situations in the context of everyday manipulation tasks, and (iv) the representation and revision of robot plans for everyday activity.
State Event Logic In this article we give a detailed presentation of state event logic, which is a modal logic for reasoning about concurrent events and causality between events [8]. State event logic differs from previous approaches in the following directions: First, events enjoy the same attention as states. In the same way as states can be viewed as models of the formulae describing the facts that hold in them we...
Default Theory for Well Founded Semantics with Explicit Negation One aim of this paper is to define a default theory for Well Founded Semantics of logic programs which have been extended with explicit negation, such that the models of a program correspond exactly to the extensions of the default theory corresponding to the program.
Logic Programming and Reasoning with Incomplete Information The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by new modal operators K and M (where for the set of rules T and formula F, KF stands for "F is known to be true by a reasoner with a set of premises T" and MF ...
A new definition of SLDNF-resolution We propose a new, "top-down" definition of SLDNF-resolution which retains the spirit of the original definition but avoids the difficulties noted in the literature. We compare it with the "bottom-up" definition of Kunen [Kun89]. The notion of SLD-resolution of Kowalski [Kow74] allows us to resolve only positive literals. As a result it is not adequate to compute with general programs. Clark [Cla79] proposed to incorporate the negation as finite failure rule. This leads to an...
Nested abnormality theories We propose a new approach to the use of circumscription for representing knowledge. Nested abnormality theories are similar to simple abnormality theories introduced by McCarthy, except that their axioms may have a nested structure, with each level corresponding to another application of the circumscription operator. The new style of applying circumscription sometimes leads to more economical and elegant formalizations. Mathematical properties of nested abnormality theories may be easier...
Planning as refinement search: a unified framework for evaluating design tradeoffs in partial-order planning Despite the long history of classical planning, there has been very little comparative analysis of the performance tradeoffs offered by the multitude of existing planning algorithms. This is partly due to the many different vocabularies within which planning algorithms are usually expressed. In this paper we show that refinement search provides a unifying framework within which various planning algorithms can be cast and compared. Specifically, we will develop refinement search semantics for planning, provide a generalized algorithm for refinement planning, and show that planners that search in the space of (partial) plans are specific instantiations of this algorithm. The different design choices in partial order planning correspond to the different ways of instantiating the generalized algorithm. We will analyze how these choices affect the search-space size and refinement cost of the resultant planner, and show that in most cases they trade one for the other. Finally, we will concentrate on two specific design choices, viz., protection strategies and tractability refinements, and develop some hypotheses regarding the effect of these choices on performance on practical problems. We will support these hypotheses with a series of focused empirical studies.
Conformant planning via symbolic model checking We tackle the problem of planning in nondeterministic domains, by presenting a new approach to conformant planning. Conformant planning is the problem of finding a sequence of actions that is guaranteed to achieve the goal despite the nondeterminism of the domain. Our approach is based on the representation of the planning domain as a finite state automaton. We use Symbolic Model Checking techniques, in particular Binary Decision Diagrams, to compactly represent and efficiently search the automaton. In this paper we make the following contributions. First, we present a general planning algorithm for conformant planning, which applies to fully nondeterministic domains, with uncertainty in the initial condition and in action effects. The algorithm is based on a breadth-first, backward search, and returns conformant plans of minimal length, if a solution to the planning problem exists, otherwise it terminates concluding that the problem admits no conformant solution. Second, we provide a symbolic representation of the search space based on Binary Decision Diagrams (BDDs), which is the basis for search techniques derived from symbolic model checking. The symbolic representation makes it possible to analyze potentially large sets of states and transitions in a single computation step, thus providing for an efficient implementation. Third, we present CMBP (Conformant Model Based Planner), an efficient implementation of the data structures and algorithm described above, directly based on BDD manipulations, which allows for a compact representation of the search layers and an efficient implementation of the search steps. Finally, we present an experimental comparison of our approach with the state-of-the-art conformant planners CGP, QBFPLAN and GPT. Our analysis includes all the planning problems from the distribution packages of these systems, plus other problems defined to stress a number of specific factors. Our approach appears to be the most effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all the problems where a comparison is possible, CMBP outperforms its competitors, sometimes by orders of magnitude.
On the Unique Satisfiability Problem
A trace-driven analysis of the UNIX 4.2 BSD file system
SODA: sensitivity based optimization of disk architecture Storage plays a pivotal role in the performance of many applications. Optimizing disk architectures is a design-time as well as a run-time issue and requires balancing between performance, power and capacity. The design space is large and there are many "knobs" that can be used to optimize disk drive behavior. Here we present a sensitivity-based optimization for disk architectures (SODA) which leverages results from digital circuit design. Using detailed models of the electro-mechanical behavior of disk drives and a suite of realistic workloads, we show how SODA can aid in design and runtime optimization.
Unsupervised (Parameter) Learning For Mrfs On Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach - a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double loop algorithm and the PCD learning experimentally and show that the former converges faster and more stably than the latter.
1.00636
0.009756
0.006504
0.00617
0.005007
0.003739
0.002532
0.00143
0.00043
0.000019
0
0
0
0
Time Series Compression Based on Adaptive Piecewise Recurrent Autoencoder. Time series account for a large proportion of the data stored in financial, medical and scientific databases. The efficient storage of time series is important in practical applications. In this paper, we propose a novel compression scheme for time series. The encoder and decoder are both composed of recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks. There is an autoencoder between encoder and decoder, which encodes the hidden state and input together and decodes them at the decoder side. Moreover, we pre-process the original time series by partitioning it into segments of varying length that have similar total variation. The experimental study shows that the proposed algorithm can achieve a competitive compression ratio on real-world time series.
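The pre-processing step above (segments of varying length but similar total variation) is easy to illustrate. A minimal sketch, assuming a 1-D numpy series; the function name and segment count are hypothetical, not from the paper:

```python
# Sketch: partition a series into contiguous segments of similar total
# variation (sum of absolute successive differences), as in the paper's
# pre-processing step. Illustrative only; not the authors' implementation.
import numpy as np

def split_by_total_variation(x, n_segments):
    tv = np.abs(np.diff(x))                        # per-step variation
    cum = np.concatenate([[0.0], np.cumsum(tv)])   # cumulative TV at each index
    targets = np.linspace(0.0, cum[-1], n_segments + 1)
    bounds = np.searchsorted(cum, targets[1:-1])   # indices hitting each TV quota
    return np.split(x, bounds)

x = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.randn(1000)
print([len(s) for s in split_by_total_variation(x, 8)])
```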
Lightweight Lossy Compression of Biometric Patterns via Denoising Autoencoders Wearable Internet of Things (IoT) devices permit the massive collection of biosignals (e.g., heart-rate, oxygen level, respiration, blood pressure, photo-plethysmographic signal, etc.) at low cost. These, can be used to help address the individual fitness needs of the users and could be exploited within personalized healthcare plans. In this letter, we are concerned with the design of lightweight and efficient algorithms for the lossy compression of these signals. In fact, we underline that compression is a key functionality to improve the lifetime of IoT devices, which are often energy constrained, allowing the optimization of their internal memory space and the efficient transmission of data over their wireless interface. To this end, we advocate the use of autoencoders as an efficient and computationally lightweight means to compress biometric signals. While the presented techniques can be used with any signal showing a certain degree of periodicity, in this letter we apply them to ECG traces, showing quantitative results in terms of compression ratio, reconstruction error and computational complexity. State of the art solutions are also compared with our approach.
Unsupervised feature extraction with autoencoder trees. The autoencoder is a popular neural network model that learns hidden representations of unlabeled data. Typically, single- or multilayer perceptrons are used in constructing an autoencoder, but we use soft decision trees (i.e., hierarchical mixture of experts) instead. Such trees have internal nodes that implement soft multivariate splits through a gating function and all leaves are weighted by the gating values on their path to get the output. The encoder tree converts the input to a lower dimensional representation in its leaves, which it passes to the decoder tree that reconstructs the original input. Because the splits are soft, the encoder and decoder trees can be trained back to back with stochastic gradient-descent to minimize reconstruction error. In our experiments on handwritten digits, newsgroup posts, and images, we observe that the autoencoder trees yield reconstruction error as small as, and sometimes smaller than, autoencoder perceptrons. One advantage of the tree is that it learns a hierarchical representation at different resolutions at its different levels and the leaves specialize at different local regions in the input space. An extension with locally linear mappings in the leaves allows a more flexible model. We also show that the autoencoder tree can be used with multimodal data where a mapping from one modality (i.e., image) to another (i.e., topics) can be learned.
Representation learning via Dual-Autoencoder for recommendation. Recommendation has attracted a vast amount of attention and research in recent decades. Most previous works employ matrix factorization techniques to learn the latent factors of users and items. Many subsequent works consider external information, e.g., social relationships of users and items' attributes, to improve the recommendation performance under the matrix factorization framework. However, matrix factorization methods may not make full use of the limited information from rating or check-in matrices, and achieve unsatisfying results. Recently, deep learning has proven able to learn good representation in natural language processing, image classification, and so on. Along this line, we propose a new representation learning framework called Recommendation via Dual-Autoencoder (ReDa). In this framework, we simultaneously learn the new hidden representations of users and items using autoencoders, and minimize the deviations of training data by the learnt representations of users and items. Based on this framework, we develop a gradient descent method to learn hidden representations. Extensive experiments conducted on several real-world data sets demonstrate the effectiveness of our proposed method compared with state-of-the-art matrix factorization based methods.
A few useful things to know about machine learning Tapping into the "folk knowledge" needed to advance machine learning applications.
Sparse Feature Learning for Deep Belief Networks Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g. low dimension, sparsity, etc). Others are based on approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine trained probabilistically, namely a Restricted Boltzmann Machine. We propose a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation. We demonstrate this method by extracting features from a dataset of handwritten numerals, and from a dataset of natural image patches. We show that by stacking multiple levels of such machines and by training sequentially, high-order dependencies between the input observed variables can be captured.
Deep Machine Learning - A New Frontier in Artificial Intelligence Research [Research Frontier] This article provides an overview of the mainstream deep learning approaches and research directions proposed over the past decade. It is important to emphasize that each approach has strengths and weaknesses, depending on the application and context in which it is being used. Thus, this article presents a summary of the current state of the deep machine learning field and some perspective into how it may evolve. Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) (and their respective variations) are focused on primarily because they are well established in the deep learning field and show great promise for future work.
Statistical Parametric Speech Synthesis Using Deep Neural Networks Conventional approaches to statistical parametric speech synthesis typically use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech parameters given texts. Speech parameters are generated from the probability densities to maximize their output probabilities, then a speech waveform is reconstructed from the generated parameters. This approach is reasonably effective but has a couple of limitations, e.g. decision trees are inefficient to model complex context dependencies. This paper examines an alternative scheme that is based on a deep neural network (DNN). The relationship between input texts and their acoustic realizations is modeled by a DNN. The use of the DNN can address some limitations of the conventional approach. Experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters.
Nonlinear autoassociation is not equivalent to PCA. A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building error reconstruction surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
TextTiling: segmenting text into multi-paragraph subtopic passages TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.
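The core of TextTiling is a lexical cohesion score between adjacent text blocks; boundaries are placed at deep valleys of that score. A simplified sketch under assumed settings (fixed block size, caller-supplied tokenization), not Hearst's exact parameterization:

```python
# Sketch: cosine similarity between bag-of-words vectors of the blocks on
# either side of each candidate gap; low values suggest subtopic boundaries.
from collections import Counter
import math

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def gap_scores(tokens, block=20):
    scores = []
    for gap in range(block, len(tokens) - block):
        left = Counter(tokens[gap - block:gap])
        right = Counter(tokens[gap:gap + block])
        scores.append((gap, cosine(left, right)))
    return scores   # valleys in the score curve are candidate boundaries
```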
Queueing models of RAID systems with maxima of waiting times A queueing model is developed that approximates the effect of synchronizations at parallel service completion instants. Exact results are first obtained for the maxima of independent exponential random variables with arbitrary parameters, and this is followed by a corresponding approximation for general random variables, which reduces to the exact result in the exponential case. This approximation is then used in a queueing model of RAID (Redundant Array of Independent Disks) systems, in which accesses to multiple disks occur concurrently and complete only when every disk involved has completed. We consider the two most common RAID variants, RAID0-1 and RAID5, as well as a multi-RAID system in which they coexist. This can be used to model adaptive multi-level RAID systems in which the RAID level appropriate to an application is selected dynamically. The random variables whose maximum has to be computed in these applications are disk response times, which are modelled by the waiting times in M/G/1 queues. To compute the mean value of their maximum requires the second moment of queueing time and we obtain this in terms of the third moment of disk service time, itself a function of seek time, rotational latency and block transfer time. Sub-models for these quantities are investigated and calibrated individually in detail. Validation against a hardware simulator shows good agreement at all traffic intensity levels, including the threshold for practical operation above which performance deteriorates sharply.
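The exact result for maxima of independent exponentials used in this model follows from inclusion-exclusion on the CDF of the maximum: E[max_i X_i] = sum over nonempty subsets S of (-1)^(|S|+1) / sum_{i in S} lambda_i. A small sanity check of that formula (my illustration, not the paper's code):

```python
# Sketch: exact expected maximum of independent exponentials vs. simulation.
from itertools import combinations
import random

def expected_max_exponential(rates):
    total = 0.0
    for k in range(1, len(rates) + 1):
        for subset in combinations(rates, k):
            total += (-1) ** (k + 1) / sum(subset)   # inclusion-exclusion term
    return total

rates = [1.0, 2.0, 3.0]
exact = expected_max_exponential(rates)
sim = sum(max(random.expovariate(r) for r in rates) for _ in range(200_000)) / 200_000
print(exact, sim)   # the two values should agree closely
```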
Lean clause-sets: generalizations of minimally unsatisfiable clause-sets We study the problem of (efficiently) deleting such clauses from conjunctive normal forms (clause-sets) which cannot contribute to any proof of unsatisfiability. For that purpose we introduce the notion of an autarky system A, which detects deletion of superfluous clauses from a clause-set F and yields a canonical normal form NA(F) ⊆ F. Clause-sets where no clauses can be deleted are called A-lean, a natural weakening of minimally unsatisfiable clause-sets opening the possibility for combinatorial approaches and including also satisfiable instances. Three special examples for autarky systems are considered: general autarkies, linear autarkies (based on linear programming) and matching autarkies (based on matching theory). We give new characterizations of ("absolutely") lean clause-sets in terms of qualitative matrix analysis, while matching lean clause-sets are characterized in terms of deficiency (the difference between the number of clauses and the number of variables), by having a cyclic associated transversal matroid, and also in terms of fully indecomposable matrices. Finally we discuss how to obtain polynomial time satisfiability decision for clause-sets with bounded deficiency, and we make a few steps towards a general theory of autarky systems.
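The deficiency measure mentioned above has a one-line computation: number of clauses minus number of distinct variables. A toy sketch using the usual signed-integer encoding of literals, which is my assumption about input format:

```python
# Sketch: deficiency(F) = (#clauses) - (#variables occurring in F).
def deficiency(clauses):
    variables = {abs(lit) for clause in clauses for lit in clause}
    return len(clauses) - len(variables)

F = [[1, 2], [-1, 2], [1, -2], [-1, -2]]   # 4 clauses over variables {1, 2}
print(deficiency(F))                        # -> 2
```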
Extended ephemeral logging: log storage management for applications with long lived transactions Extended ephemeral logging (XEL) is a new technique for managing a log of database activity subject to the general assumption that the lifetimes of an application’s transactions may be statistically distributed over a wide range. The log resides on nonvolatile disk storage and provides fault tolerance to system failures (in which the contents of volatile main memory storage may be lost). XEL segments a log into a chain of fixed-size FIFO queues and performs generational garbage collection on records in the log. Log records that are no longer necessary for recovery purposes are “thrown away” when they reach the head of a queue; only records that are still needed for recovery are forwarded from the head of one queue to the tail of the next. XEL does not require checkpoints, permits fast recovery after a crash and is well suited for applications that have a wide distribution of transaction lifetimes. Quantitative evaluation of XEL via simulation indicates that it can significantly reduce the disk space required for the log, at the expense of slightly higher bandwidth for log information and more main memory; the reduced size of the log permits much faster recovery after a crash as well as cost savings. XEL can significantly reduce both the disk space and the disk bandwidth required for log information in a system that has been augmented with a nonvolatile region of main memory.
Mobile Robot Control Using a Cloud of Particles. Common control systems for mobile robots include the use of deterministic control laws together with state estimation approaches and the consideration of the certainty equivalence principle. Recent approaches consider the use of partially observable Markov decision process strategies together with Bayesian estimators. In order to reduce the required processing power and yet allow for multimodal or non-Gaussian distributions, a scheme based on a particle filter and a corresponding cloud of input signals is proposed in this paper. Results are presented and compared to a scheme with extended Kalman filter and the assumption that the certainty equivalence holds.
1.22
0.11
0.11
0.055
0.02
0.002755
0.000472
0.000029
0.000004
0
0
0
0
0
Data Protection by Logic Programming This paper discusses the representation of a variety of role-based access control (RBAC) security models in which users and permissions may be assigned to roles for restricted periods of time. These security models are formulated as logic programs which specify the security information which protects data, and from which a user's permission to perform operations on data items may be determined by theorem-proving. The representation and verification of integrity constraints on these logic programs is described, and practical issues are considered together with the technical results which apply to the approach.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
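The definition is simple enough to check by brute force on small ground programs: M is a stable model iff M equals the least model of the Gelfond-Lifschitz reduct P^M. A minimal sketch with an assumed tuple encoding of rules:

```python
# Sketch: enumerate candidate sets and test the stable-model fixpoint
# condition. Rules are (head, positive_body, negative_body) over atom strings.
from itertools import chain, combinations

def reduct(rules, M):
    # drop rules whose negative body intersects M; delete remaining negations
    return [(h, pos) for h, pos, neg in rules if not (set(neg) & M)]

def least_model(pos_rules):
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

def stable_models(rules, atoms):
    candidates = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
    return [set(S) for S in candidates if least_model(reduct(rules, set(S))) == set(S)]

# p :- not q.   q :- not p.
print(stable_models([('p', [], ['q']), ('q', [], ['p'])], ['p', 'q']))  # [{'p'}, {'q'}]
```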
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
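The "simple preprocessor" referred to above renames each classically negated atom -p to a fresh atom (here neg_p) and adds a constraint forbidding p and neg_p from holding together. A toy sketch; the string-based rule encoding is an assumption for illustration:

```python
# Sketch: compile classical negation away by renaming, keeping negation as
# failure ('not ') intact. Head None stands for an integrity constraint.
def rename(lit):
    naf = lit.startswith('not ')
    core = lit[4:] if naf else lit
    if core.startswith('-'):
        core = 'neg_' + core[1:]
    return ('not ' + core) if naf else core

def eliminate_classical_negation(rules, atoms):
    out = [(rename(h), [rename(b) for b in body]) for h, body in rules]
    out += [(None, [a, 'neg_' + a]) for a in atoms]   # forbid p together with neg_p
    return out

rules = [('-flies', ['penguin']), ('flies', ['bird', 'not -flies'])]
print(eliminate_classical_negation(rules, ['flies']))
```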
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
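A minimal numpy sketch of the technique described, with a polynomial kernel and the standard centering of the kernel matrix in feature space; parameter values are illustrative, not the paper's experiments:

```python
# Sketch: kernel PCA. Eigenvectors of the centered kernel matrix give the
# expansion coefficients; projections never require the explicit feature map.
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    K = (X @ X.T + 1.0) ** degree                  # polynomial kernel matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one     # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                             # projections of training points

X = np.random.randn(100, 256)                      # e.g. flattened 16x16 images
Z = kernel_pca(X, n_components=2)
```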
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
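The square-root idea reduces, at each linearization, to solving a least-squares problem via QR instead of forming the information matrix. A minimal numpy sketch with a synthetic dense Jacobian (real SAM exploits sparsity and variable ordering, which this toy omits):

```python
# Sketch: solve min ||A x - b|| via QR; R is the square-root information
# matrix, and back-substitution recovers the state update.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))     # stand-in measurement Jacobian
b = rng.standard_normal(50)           # stand-in residual vector
Q, R = np.linalg.qr(A)                # A = Q R
x = np.linalg.solve(R, Q.T @ b)       # same solution as (A^T A) x = A^T b, better conditioned
print(np.allclose(A.T @ (A @ x), A.T @ b))   # the normal equations hold
```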
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Evaluation techniques for storage hierarchies The design of efficient storage hierarchies generally involves the repeated running of "typical" program address traces through a simulated storage system while various hierarchy design parameters are adjusted. This paper describes a new and efficient method of determining, in one pass of an address trace, performance measures for a large class of demand-paged, multilevel storage systems utilizing a variety of mapping schemes and replacement algorithms. The technique depends on an algorithm classification, called "stack algorithms," examples of which are "least frequently used," "least recently used," "optimal," and "random replacement" algorithms. The techniques yield the exact access frequency to each storage device, which can be used to estimate the overall performance of actual storage hierarchies.
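For LRU, the stack property means one pass over the trace yields a stack distance per reference, from which the hit ratio of every cache size can be read off. A minimal sketch of that idea (LRU only; the paper's techniques cover a broader class of stack algorithms):

```python
# Sketch: one-pass LRU stack distances; hits at cache size c are exactly the
# references with distance <= c.
from collections import Counter

def lru_stack_distances(trace):
    stack, dists = [], []
    for page in trace:
        if page in stack:
            d = stack.index(page) + 1   # depth from the top, 1-based
            stack.remove(page)
        else:
            d = float('inf')            # first reference: compulsory miss
        dists.append(d)
        stack.insert(0, page)           # page becomes most recently used
    return dists

trace = list('abcabddca')
hist = Counter(lru_stack_distances(trace))
for size in range(1, 5):
    hits = sum(c for d, c in hist.items() if d <= size)
    print(size, hits / len(trace))      # hit ratio for each cache size
```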
CRAMM: virtual memory support for garbage-collected applications Existing virtual memory systems usually work well with applications written in C and C++, but they do not provide adequate support for garbage-collected applications. The performance of garbage-collected applications is sensitive to heap size. Larger heaps reduce the frequency of garbage collections, making them run several times faster. However, if the heap is too large to fit in the available RAM, garbage collection can trigger thrashing. Existing Java virtual machines attempt to adapt their application heap sizes to fit in RAM, but suffer performance degradations of up to 94% when subjected to bursts of memory pressure. We present CRAMM (Cooperative Robust Automatic Memory Management), a system that solves these problems. CRAMM consists of two parts: (1) a new virtual memory system that collects detailed reference information for (2) an analytical model tailored to the underlying garbage collection algorithm. The CRAMM virtual memory system tracks recent reference behavior with low overhead. The CRAMM heap sizing model uses this information to compute a heap size that maximizes throughput while minimizing paging. We present extensive empirical results demonstrating CRAMM's ability to maintain high performance in the face of changing application and system load.
Small cache, big effect: provable load balancing for randomly partitioned cluster services Load balancing requests across a cluster of back-end servers is critical for avoiding performance bottlenecks and meeting service-level objectives (SLOs) in large-scale cloud computing services. This paper shows how a small, fast popularity-based front-end cache can ensure load balancing for an important class of such services; furthermore, we prove an O(n log n) lower-bound on the necessary cache size and show that this size depends only on the total number of back-end nodes n, not the number of items stored in the system. We validate our analysis through simulation and empirical results running a key-value storage system on an 85-node cluster.
Dynamic resource allocation for database servers running on virtual storage As increasing amounts of valuable information are produced and persist digitally, the ability to determine the origin of data becomes important. In science, medicine, commerce, and government, data provenance tracking is essential for rights protection, ...
Mortar: filling the gaps in data center memory Data center servers are typically overprovisioned, leaving spare memory and CPU capacity idle to handle unpredictable workload bursts by the virtual machines running on them. While this allows for fast hotspot mitigation, it is also wasteful. Unfortunately, making use of spare capacity without impacting active applications is particularly difficult for memory since it typically must be allocated in coarse chunks over long timescales. In this work we propose re- purposing the poorly utilized memory in a data center to store a volatile data store that is managed by the hypervisor. We present two uses for our Mortar framework: as a cache for prefetching disk blocks, and as an application-level distributed cache that follows the memcached protocol. Both prototypes use the framework to ask the hypervisor to store useful, but recoverable data within its free memory pool. This allows the hypervisor to control eviction policies and prioritize access to the cache. We demonstrate the benefits of our prototypes using realistic web applications and disk benchmarks, as well as memory traces gathered from live servers in our university's IT department. By expanding and contracting the data store size based on the free memory available, Mortar improves average response time of a web application by up to 35% compared to a fixed size memcached deployment, and improves overall video streaming performance by 45% through prefetching.
WSCLOCK—a simple and effective algorithm for virtual memory management A new virtual memory management algorithm WSCLOCK has been synthesized from the local working set (WS) algorithm, the global CLOCK algorithm, and a new load control mechanism for auxiliary memory access. The new algorithm combines the most useful feature of WS—a natural and effective load control that prevents thrashing—with the simplicity and efficiency of CLOCK. Studies are presented to show that the performance of WS and WSCLOCK are equivalent, even if the savings in overhead are ignored.
RACE: A Robust Adaptive Caching Strategy for Buffer Cache While many block replacement algorithms for buffer caches have been proposed to address the well-known drawbacks of the LRU algorithm, they are not robust and cannot maintain a consistent performance improvement over all workloads. This paper proposes a novel and simple replacement scheme, called RACE (Robust Adaptive buffer Cache management schemE), which differentiates the locality of I/O streams by actively detecting access patterns inherently exhibited in two correlated spaces: the discrete block space of program contexts from which I/O requests are issued and the continuous block space within files to which I/O requests are addressed. This scheme combines global I/O regularities of an application and local I/O regularities of individual files accessed in that application to accurately estimate the locality strength, which is crucial in deciding which blocks are to be replaced upon a cache miss. Through comprehensive simulations on eight real-application traces, RACE is shown to significantly outperform LRU and all other state-of-the-art cache management schemes studied in this paper, in terms of absolute hit ratios. Specifically, it improves the absolute hit ratios of LRU, UBM, PCC and AMP by as much as 56.9%, 22.5%, 42.7% and 39.9%, with an average of 15.5%, 3.3%, 9.4% and 8.8%, respectively. Given the relatively high buffer cache miss penalties, which typically are six orders of magnitude higher than buffer cache hit times, these gains of hit ratios obtained by RACE are likely to imply significant performance gains in applications' response times as well.
Write-only disk caches With recent declines in the cost of semiconductor memory and the increasing need for high performance I/O disk systems, it makes sense to consider the design of large caches. In this paper, we consider the effect of caching writes. We show that cache sizes in the range of a few percent allow writes to be performed at negligible or no cost and independently of locality considerations.
A study of integrated prefetching and caching strategies Prefetching and caching are effective techniques for improving the performance of file systems, but they have not been studied in an integrated fashion. This paper proposes four properties that optimal integrated strategies for prefetching and caching must satisfy, and then presents and studies two such integrated strategies, called aggressive and conservative. We prove that the performance of the conservative approach is within a factor of two of optimal and that the performance of the aggressive strategy is a factor significantly less than twice that of the optimal case. We have evaluated these two approaches by trace-driven simulation with a collection of file access traces. Our results show that the two integrated prefetching and caching strategies are indeed close to optimal and that these strategies can reduce the running time of applications by up to 50%.
Self-tuning wireless network power management Current wireless network power management often substantially degrades performance and may even increase overall energy usage when used with latency-sensitive applications. We propose self-tuning power management (STPM) that adapts its behavior to the access patterns and intent of applications, the characteristics of the network interface, and the energy usage of the platform. We have implemented STPM as a Linux kernel module--our results show substantial benefits for distributed file systems, streaming audio, and thin-client applications. Compared to default 802.11b power management, STPM reduces the total energy usage of an iPAQ running the Coda distributed file system by 21% while also reducing interactive file system delay by 80%. Further, STPM adapts to diverse operating conditions: it yields good results on both laptops and handhelds, supports 802.11b network interfaces with substantially different characteristics, and performs well across a range of application network access patterns.
A project on high performance I/O subsystems
Encoding Planning Problems in Nonmonotonic Logic Programs We present a framework for encoding planning problems in logic programs with negation as failure, having computational efficiency as our major consideration. In order to accomplish our goal, we bring together ideas from logic programming and the planning systems graphplan and satplan. We discuss different representations of planning problems in logic programs, point out issues related to their performance, and show ways to exploit the structure of the domains in these representations....
DMP3: A Dynamic Multilayer Perceptron Construction Algorithm This paper presents a method for constructing multilayer perceptron networks (MLPs) called DMP3 (Dynamic Multilayer Perceptron 3). DMP3 differs from other MLP construction techniques in several important ways. The motivation for these differences and how they can lead to improved performance are discussed in detail in this paper. The DMP3 algorithm constructs MLPs by incrementally adding network elements to the output node of the network. Dependent upon the reduction in network error, the complexity of new elements that are added to the network can increase slightly with each growth cycle of the algorithm. As new elements are added to the network, the existing network structure is frozen and only the weights of the new elements are trained. In addition, the weights which link the new elements to the existing network structure are initially set to predetermined values, which predisposes each new network element to perform a particular function in relation to the existing network structure, which can decrease the amount of time required for training the new elements. Information gain rather than error minimization is used to guide the growth of the network, which increases the utility of newly added network elements and decreases the likelihood that a premature dead end in the growth of the network will occur. A short, improvement-driven training cycle is used to train new network elements, which naturally helps to prevent overlearning and memorization. The performance of DMP3 is compared with that of several other well-known machine learning and neural network learning algorithms (c4.5, cn2, ib1, CV based MLP architecture selection, c4, id3, perceptron, and mml) on 9 real world data sets taken from the UCI machine learning database. Simulation results show that DMP3 performs better (on average) than any of the other algorithms on the data sets tested.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.008254
0.009038
0.006897
0.003595
0.003448
0.002301
0.001285
0.000677
0.000236
0.000057
0.000003
0
0
0
H-Code: A Hybrid MDS Array Code to Optimize Partial Stripe Writes in RAID-6 RAID-6 is widely used to tolerate concurrent failures of any two disks to provide a higher level of reliability with the support of erasure codes. Among many implementations, one class of codes called Maximum Distance Separable (MDS) codes aims to offer data protection against disk failures with optimal storage efficiency. Typical MDS codes contain horizontal and vertical codes. Due to the horizontal parity, in the case of a partial stripe write (I/O operations that write new data or update data to a subset of disks in an array) in a row, horizontal codes may need fewer I/O operations in most cases, but suffer from unbalanced I/O distribution. They also have a limitation of high single write complexity. Vertical codes improve single write complexity compared to horizontal codes, while they still suffer from poor performance in partial stripe writes. In this paper, we propose a new XOR-based MDS array code, named Hybrid Code (H-Code), which optimizes partial stripe writes for RAID-6 by taking advantage of both horizontal and vertical codes. H-Code is a solution for an array of (p+1) disks, where p is a prime number. Unlike other codes taking a dedicated anti-diagonal parity strip, H-Code uses a special anti-diagonal parity layout and distributes the anti-diagonal parity elements among disks in the array, which achieves a more balanced I/O distribution. On the other hand, the horizontal parity of H-Code ensures that a partial stripe write to continuous data elements in a row shares the same row parity chain, which can achieve optimal partial stripe write performance. Not only within a row but also within a stripe, H-Code offers optimal partial stripe write complexity to two continuous data elements and optimal partial stripe write performance among all MDS codes to the best of our knowledge. Specifically, compared to RDP and EVENODD codes, H-Code reduces I/O cost by up to 15.54% and 22.17%. Overall, H-Code has optimal storage efficiency, optimal encoding/decoding computational complexity, and optimal complexity of both single write and partial stripe write.
Evaluating the impact of Undetected Disk Errors in RAID systems Despite the reliability of modern disks, recent studies have made it clear that a new class of faults, Undetected Disk Errors (UDEs), also known as silent data corruption events, becomes a real challenge as storage capacity scales. While RAID systems have proven effective in protecting data from traditional disk failures, silent data corruption events remain a significant problem unaddressed by RAID. We present a fault model for UDEs, and a hybrid framework for simulating UDEs in large-scale systems. The framework combines a multi-resolution discrete event simulator with numerical solvers. Our implementation enables us to model arbitrary storage systems and workloads and estimate the rate of undetected data corruptions. We present results for several systems and workloads, from gigascale to petascale. These results indicate that corruption from UDEs is a significant problem in the absence of protection schemes and that such schemes dramatically decrease the rate of undetected data corruption.
Exploiting Decoding Computational Locality to Improve the I/O Performance of an XOR-Coded Storage Cluster under Concurrent Failures In today's large data centers, hundreds to thousands of nodes are deployed as storage clusters to provide cloud and big data storage service, where failures are not rare. Therefore, efficient data redundancy technologies are needed to ensure data availability and reliability. Compared to traditional technology based on replication, erasure codes which tolerate multiple failures provide availability and reliability at a much lower cost. However, those erasure-coded, particularly XOR-coded storage clusters, suffer from performance problem caused by degraded reads under concurrent node failures. With the traditional centralized decoding method, a large amount of extra data has to be transmitted over the network to service degraded reads. In particular, the degraded reads in XOR-coded stripes with concurrent failures result in notably high network traffic. To address this problem, we propose a novel decoding approach called Local Decoding First or LDF for short. Via exploiting decoding computational locality of XOR-coded storage clusters, LDF significantly reduces the required network traffic and hence reduces the access latency of degraded reads, thus improving I/O throughput. A prototype of LDF with two typical XOR codes has been implemented in the popular distributed file system HDFS on a storage cluster composed of 40 nodes. The experimental results show that LDF dramatically reduces the network traffic under concurrent node failures and thus improves both the I/O throughput and access latency.
Efficient parity placement schemes for tolerating triple disk failures in RAID architectures This paper proposes two improved triple parity placement schemes, the HDD1 (horizontal and dual diagonal) and HDD2 schemes, to enhance the reliability of a RAID system. Both schemes can tolerate up to three disk failures by using three types of parity information (horizontal, diagonal, and anti-diagonal parities) in RAID disk block partitions. The HDD1 scheme can reduce the occurrences of bottlenecks because its horizontal and anti-diagonal parities are uniformly distributed over a disk array, while diagonal parities are placed in a dedicated disk. The HDD2 scheme uses one more disk than HDD1 to store the horizontal parities and an additional diagonal parity, while the anti-diagonal and the diagonal parities are placed in the same way as in the HDD1 scheme, only with a minor difference. The encoding and decoding algorithms of both schemes are simple and effective. Many of the steps of the encoding and decoding algorithms can be executed in parallel. Both schemes enable a RAID to recover rapidly from up to three disk failures, with a single algorithm applied straightforwardly.
Implementation and Evaluation of a Popularity-Based Reconstruction Optimization Algorithm in Availability-Oriented Disk Arrays In this paper, we implement the incorporation of a Popularity-based multi-threaded Reconstruction Optimization algorithm, PRO, into the recovery mechanism of the Linux software RAID (MD), which is a well-known and widely-used availability-oriented disk array scheme. To evaluate the impact of PRO on RAID-structured storage systems such as MD, we conduct extensive trace-driven experiments. Our results demonstrate PRO's significant performance advantage over the existing reconstruction schemes, especially on a RAID-5 disk array, in terms of the measured reconstruction time and response time.
Making LRU friendly to weak locality workloads: a novel replacement algorithm to improve buffer cache performance Although the LRU replacement algorithm has been widely used in buffer cache management, it is well-known for its inability to cope with access patterns with weak locality. Previously proposed algorithms to improve LRU greatly increase complexity and/or cannot provide consistently improved performance. Some of the algorithms only address LRU problems in certain specific and predefined cases. Motivated by the limitations of existing algorithms, we propose a general and efficient replacement algorithm, called Low Inter-reference Recency Set (LIRS). LIRS effectively addresses the limitations of LRU by using recency to evaluate Inter-Reference Recency (IRR) of accessed blocks for making a replacement decision. This is in contrast to what LRU does: directly using recency to predict the next reference time. Meanwhile, LIRS mostly retains the simple assumption adopted by LRU for predicting future block access behaviors. Conducting simulations with a variety of traces of different access patterns and with a wide range of cache sizes, we show that LIRS significantly outperforms LRU and outperforms other existing replacement algorithms in most cases. Furthermore, we show that the additional cost for implementing LIRS is trivial in comparison with that of LRU. We also show that the LIRS algorithm can be extended into a family of replacement algorithms, in which LRU is a special member.
Data cache management using frequency-based replacement We propose a new frequency-based replacement algorithm for managing caches used for disk blocks by a file system, database management system, or disk control unit, which we refer to here as data caches. Previously, LRU replacement has usually been used for such caches. We describe a replacement algorithm based on the concept of maintaining reference counts in which locality has been “factored out”. In this algorithm replacement choices are made using a combination of reference frequency and block age. Simulation results based on traces of file system and I/O activity from actual systems show that this algorithm can offer up to 34% performance improvement over LRU replacement, where the improvement is expressed as the fraction of the performance gain achieved between LRU replacement and the theoretically optimal policy in which the reference string must be known in advance. Furthermore, the implementation complexity and efficiency of this algorithm is comparable to one using LRU replacement.
Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, Chicago, Illinois, June 1-3, 1988
Logic programs with classical negation
Exokernel: an operating system architecture for application-level resource management Traditional operating systems limit the performance, flexibility, and functionality of applications by fixing the interface and implementation of operating system abstractions such as interprocess communication and virtual memory. The exokernel operating system architecture addresses this problem by providing application-level management of physical resources. In the exokernel architecture, a small kernel securely exports all hardware resources through a low-level interface to untrusted library operating systems. Library operating systems use this interface to implement system objects and policies. This separation of resource protection from management allows application-specific customization of traditional operating system abstractions by extending, specializing, or even replacing libraries. We have implemented a prototype exokernel operating system. Measurements show that most primitive kernel operations (such as exception handling and protected control transfer) are ten to 100 times faster than in Ultrix, a mature monolithic UNIX operating system. In addition, we demonstrate that an exokernel allows applications to control machine resources in ways not possible in traditional operating systems. For instance, virtual memory and interprocess communication abstractions are implemented entirely within an application-level library. Measurements show that application-level virtual memory and interprocess communication primitives are five to 40 times faster than Ultrix's kernel primitives. Compared to state-of-the-art implementations from the literature, the prototype exokernel system is at least five times faster on operations such as exception dispatching and interprocess communication.
Why not negation by fixpoint? There is a fixpoint semantics for DATALOG programs with negation that is a natural generalization of the standard semantics for DATALOG programs without negation. We show that, unfortunately, several compelling complexity-theoretic obstacles rule out its efficient implementation. As an alternative, we propose Inflationary DATALOG, an efficiently implementable semantics for negation, based on inflationary fixpoints.
On optimal degree selection for polynomial kernel with support vector machines: Theoretical and empirical investigations The key challenge in kernel based learning algorithms is the choice of an appropriate kernel and its optimal parameters. Selecting the optimal degree of a polynomial kernel is critical to ensure good generalisation of the resulting support vector machine model. In this paper we propose Bayesian and Laplace approximation methods to estimate the polynomial degree. A rule based meta-learning approach is then proposed for automatic polynomial kernel and its optimal degree selection. The new approach is constructed and tested on different sizes of 112 datasets with binary class as well as multi class classification problems. An extensive computational evaluation of these methods is conducted, and rules are generated to determine when these approximation methods are appropriate.
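The baseline such methods replace is a cross-validated grid search over the degree. A minimal sklearn sketch of that baseline (dataset choice and degree range are illustrative; the paper's Bayesian and Laplace estimates avoid this exhaustive search):

```python
# Sketch: pick the polynomial degree for an SVM by 5-fold CV accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
scores = {}
for d in range(1, 6):
    model = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=d, gamma='scale'))
    scores[d] = cross_val_score(model, X, y, cv=5).mean()
print(scores, 'best degree:', max(scores, key=scores.get))
```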
Adaptive placement of method executions within a customizable distributed object-based runtime system: design, implementation and performance This paper presents the design and implementation of a mechanism aimed at enhancing the performance of distributed object-based applications. This goal is achieved by means of a new algorithm implementing placement of method executions that adapts to processors' load and to objects' characteristics, the latter allowing the cost of methods' remote execution to be approximated. The behavior of the proposed placement algorithm is examined by providing performance measures obtained from its integration within a customizable distributed object-based runtime system. In particular, the cost of method executions using our algorithm is compared with the cost resulting from the standard placement technique that consists of executing any method on the storing node of its embedding object.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.033889
0.033333
0.033333
0.023333
0.008333
0.003704
0.000595
0.000007
0
0
0
0
0
0
An erasure-resilient encoding system for flexible reading and writing in storage networks We introduce the Read-Write-Coding-System (RWC), a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n,r,d)-codes that have: (a) minimum (Hamming) distance d = n-r+1 for any two codewords, and (b) for any codeword y1 there exists a codeword y2 with distance of at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g., RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus, w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, which is different from other codes that always need to rewrite y completely as x changes. In this article, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, RW codes which are adaptive to changing demands. Finally, we investigate for which choices of (k,r,w,n) a coding system exists over the binary alphabet F2 = {0,1} and discuss how RW codes can be combined.
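A small worked instance of the parameter interplay described above: taking $k = r = 4$ and $w = n = 5$ over $\mathbb{F}_2$ yields the familiar single-parity (RAID 4/5) code, and the stated distance bound gives

$$d = n - r + 1 = 5 - 4 + 1 = 2,$$

i.e. the loss of any one symbol is tolerated, while a complete change of the message rewrites all $w = 5$ symbols. Read-intensive applications therefore favor small $r$ and write-intensive ones small $w$, subject to the code's lower bound.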
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
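The canonical instance of such a default, in the usual rule notation (the example is ours, not from the abstract):

$$\frac{\mathit{bird}(x) : \mathit{flies}(x)}{\mathit{flies}(x)}$$

If $\mathit{bird}(\mathit{tweety})$ is known and $\mathit{flies}(\mathit{tweety})$ is consistent with current beliefs, conclude $\mathit{flies}(\mathit{tweety})$; learning $\neg\mathit{flies}(\mathit{tweety})$ later forces the conclusion to be withdrawn, which is exactly the nonmonotonicity the abstract refers to.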
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
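The definition can be checked mechanically for tiny ground programs; a brute-force sketch using the Gelfond-Lifschitz reduct (the representation, with a rule as (head, positive body, negative body), is ours):

```python
from itertools import chain, combinations

def least_model(positive_rules):
    """Least Herbrand model of a negation-free ground program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate, rules):
    # Reduct: drop rules whose negative body intersects the candidate,
    # then delete negative literals from the remaining rules.
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

def stable_models(atoms, rules):
    atoms = list(atoms)
    subsets = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(set(s), rules)]

# Example: p :- not q.   q :- not p.   (two stable models: {p} and {q})
rules = [("p", frozenset(), frozenset({"q"})),
         ("q", frozenset(), frozenset({"p"}))]
print(stable_models({"p", "q"}, rules))  # [{'p'}, {'q'}]
```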
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
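The baseline that these techniques improve on is the plain recursive expansion of quantifiers; a naive sketch (exponential, no pruning, and the representation is ours):

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a closed QBF given as a quantifier prefix and a matrix.

    prefix: list of ('forall' | 'exists', var) pairs, outermost first.
    matrix: function taking a dict var -> bool and returning bool.
    """
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: val})
                for val in (False, True))
    return all(branches) if quant == "forall" else any(branches)

# forall x exists y. (x XOR y)  -- true: pick y = not x.
print(eval_qbf([("forall", "x"), ("exists", "y")],
               lambda a: a["x"] != a["y"]))  # True
```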
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
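A compact sketch of the computation: form the kernel matrix, center it in feature space, solve the eigenvalue problem, and project. The polynomial kernel choice and the numerical guard are decisions made here for illustration, not prescribed by the paper.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    """Nonlinear PCA via the kernel eigenvalue problem (sketch)."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree               # polynomial kernel matrix
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)       # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize coefficients so each feature-space eigenvector has unit norm.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                          # projections of training data
```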
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
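The factorization at the heart of these methods, in standard least-squares notation with $A$ the measurement Jacobian, $b$ the residual vector, and $R$ upper triangular:

$$\min_{\theta} \lVert A\theta - b \rVert^2, \qquad A^\top A = R^\top R,$$

so the solution reduces to the two triangular solves $R^\top y = A^\top b$ and $R\theta = y$; a good column ordering keeps $R$ sparse, which is the locality effect the abstract mentions.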
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Feature Incay for Representation Regularization. Softmax-based loss is widely used in deep learning for multi-class classification, where each class is represented by a weight vector and each sample is represented as a feature vector. Different from traditional learning algorithms, where features are pre-defined and only weight vectors are tunable through training, in deep learning the feature vectors are also tunable, as representation learning. Thus we investigate how to improve the classification performance by better adjusting the features. One main observation is that elongating the feature norm of both correctly-classified and mis-classified feature vectors improves learning: (1) increasing the feature norm of correctly-classified examples induces smaller training loss; (2) increasing the feature norm of mis-classified examples can upweight the contribution from hard examples. Accordingly, we propose feature incay to regularize representation learning by encouraging larger feature norm. In contrast to weight decay, which shrinks the weight norm, feature incay is proposed to stretch the feature norm. Extensive empirical results on MNIST, CIFAR10, CIFAR100 and LFW demonstrate the effectiveness of feature incay.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Operationalizing Conflict and Cooperation between Automated Software Agents in Wikipedia: A Replication and Expansion of 'Even Good Bots Fight'. This paper replicates, extends, and refutes conclusions made in a study published in PLoS ONE ("Even Good Bots Fight"), which claimed to identify substantial levels of conflict between automated software agents (or bots) in Wikipedia using purely quantitative methods. By applying an integrative mixed-methods approach drawing on trace ethnography, we place these alleged cases of bot-bot conflict into context and arrive at a better understanding of these interactions. We found that overwhelmingly, the interactions previously characterized as problematic instances of conflict are typically better characterized as routine, productive, even collaborative work. These results challenge past work and show the importance of qualitative/quantitative collaboration. In our paper, we present quantitative metrics and qualitative heuristics for operationalizing bot-bot conflict. We give thick descriptions of kinds of events that present as bot-bot reverts, helping distinguish conflict from non-conflict. We computationally classify these kinds of events through patterns in edit summaries. By interpreting found/trace data in the socio-technical contexts in which people give that data meaning, we gain more from quantitative measurements, drawing deeper understandings about the governance of algorithmic systems in Wikipedia. We have also released our data collection, processing, and analysis pipeline, to facilitate computational reproducibility of our findings and to help other researchers interested in conducting similar mixed-method scholarship in other platforms and contexts.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form. The dynamic surface control (DSC) technique was developed recently by Swaroop et al. This technique simplified the backstepping design for the control of nonlinear systems in strict-feedback form by overcoming the problem of "explosion of complexity." It was later extended to adaptive backstepping design for nonlinear systems with linearly parameterized uncertainty. In this paper, by incorporating this design technique into a neural network based adaptive control design framework, we have developed a backstepping based control design for a class of nonlinear systems in strict-feedback form with arbitrary uncertainty. Our development is able to eliminate the problem of "explosion of complexity" inherent in the existing method. In addition, a stability analysis is given which shows that our control law can guarantee the uniformly ultimate boundedness of the solution of the closed-loop system, and make the tracking error arbitrarily small.
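The device that removes the "explosion of complexity" is the first-order filter DSC inserts between backstepping steps; in common notation (ours, not necessarily the paper's), each virtual control $\alpha_i$ is passed through

$$\tau_i \dot{\beta}_i + \beta_i = \alpha_i, \qquad \beta_i(0) = \alpha_i(0),$$

and the filter output $\beta_i$ is used in the next design step, so the repeated analytic differentiation of $\alpha_i$, whose terms multiply at every stage, is replaced by a filter state.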
Robust and adaptive backstepping control for nonlinear systems using RBF neural networks. In this paper, two different backstepping neural network (NN) control approaches are presented for a class of affine nonlinear systems in the strict-feedback form with unknown nonlinearities. By a special design scheme, the controller singularity problem is avoided perfectly in both approaches. Furthermore, the closed loop signals are guaranteed to be semiglobally uniformly ultimately bounded and the outputs of the system are proved to converge to a small neighborhood of the desired trajectory. The control performances of the closed-loop systems can be shaped as desired by suitably choosing the design parameters. Simulation results obtained demonstrate the effectiveness of the approaches proposed. The differences observed between the inputs of the two controllers are analyzed briefly.
Adaptive Dynamic Surface Control of Flexible-Joint Robots Using Self-Recurrent Wavelet Neural Networks A new method for the robust control of flexible-joint (FJ) robots with model uncertainties in both robot dynamics and actuator dynamics is proposed. The proposed control system is a combination of the adaptive dynamic surface control (DSC) technique and the self-recurrent wavelet neural network (SRWNN). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem in backstepping controllers. The SRWNNs are used to observe the arbitrary model uncertainties of FJ robots, and all their weights are trained online. From the Lyapunov stability analysis, their adaptation laws are induced, and the uniformly ultimately boundedness of all signals in a closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot are utilized to validate the good position tracking performance and robustness against payload uncertainties and external disturbances of the proposed control system.
Robust Adaptive Position Mooring Control for Marine Vessels In this paper, robust adaptive control with dynamic control allocation is proposed for the positioning of marine vessels equipped with a thruster assisted mooring system, in the presence of parametric uncertainties, unknown disturbances and input nonlinearities. Using neural network approximation and variable structure based techniques in combination with backstepping and Lyapunov synthesis, the positioning control is developed to handle the uncertainties, input saturation and dead-zone characteristics of the mooring lines and thrusters. Full state feedback with all states measurable and output feedback using high gain observer to estimate unmeasurable states are considered. Dynamic control allocation is presented for actuation of the position mooring system. Under the proposed robust adaptive control, semi-global uniform boundedness of the closed-loop signals are guaranteed. Numerical simulations are carried out to show the effectiveness of the proposed control.
Precise Positioning of Nonsmooth Dynamic Systems Using Fuzzy Wavelet Echo State Networks and Dynamic Surface Sliding Mode Control. This paper presents a precise positioning robust hybrid intelligent control scheme based on the effective compensation of nonsmooth nonlinearities, such as friction, deadzone, and uncertainty in a dynamic system. A new adaptive fuzzy wavelet echo state network algorithm is proposed to improve performance in terms of approximating unknown uncertainties in conventional neural network algorithms. ...
A combined backstepping and small-gain approach to robust adaptive fuzzy control for strict-feedback nonlinear systems In this paper, a robust adaptive tracking control problem is discussed for a general class of strict-feedback uncertain nonlinear systems. The systems may possess a wide class of uncertainties referred to as unstructured uncertainties, which are not linearly parameterized and for which no prior knowledge of the bounding functions is available. Takagi-Sugeno type fuzzy logic systems are used to approximate the uncertainties. A unified and systematic procedure is employed to derive two kinds of novel robust adaptive tracking controllers by use of the input-to-state stability (ISS) and by combining the backstepping technique and generalized small gain approach. One is the robust adaptive fuzzy tracking controller (RAFTC) for the system without input gain uncertainty. The other is the robust adaptive fuzzy sliding tracking controller (RAFSTC) for the system with input gain uncertainty. Both algorithms have two advantages: semi-global uniform ultimate boundedness of the adaptive control system in the presence of unstructured uncertainties, and an adaptive mechanism with minimal learning parameterizations. Four application examples, including a pendulum system with a motor, a one-link robot, ship roll stabilization with an actuator, and a single-link manipulator with a flexible joint, are used to demonstrate the effectiveness and performance of the proposed schemes.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
An Efficient Unification Algorithm
Tree clustering for constraint networks The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.
Bootstrapping with Noise: An Effective Regularization Technique Bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feed-forward networks and for other statistical methods such as generalized additive models. It is shown that noisy bootstrap performs best in conjunction with weight decay regularization and ensemble averaging. The two-spiral problem, a highly non-linear, noise-free dataset, is used to demonstrate these findings. The combination of noisy bootstrap and ensemble averaging is also...
Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.
Dynamic resource allocation for database servers running on virtual storage As increasing amounts of valuable information are produced and persist digitally, the ability to determine the origin of data becomes important. In science, medicine, commerce, and government, data provenance tracking is essential for rights protection, ...
Destage Algorithms for Disk Arrays with Nonvolatile Caches In a disk array with a nonvolatile write cache, destages from the cache to the disk are performed in the background asynchronously while read requests from the host system are serviced in the foreground. In this paper, we study a number of algorithms for scheduling destages in a RAID-5 system. We introduce a new scheduling algorithm, called linear threshold scheduling, that adaptively varies the rate of destages to disks based on the instantaneous occupancy of the write cache. The performance of the algorithm is compared with that of a number of alternative scheduling approaches, such as least-cost scheduling and high/low mark. The algorithms are evaluated in terms of their effectiveness in making destages transparent to the servicing of read requests from the host, disk utilization, and their ability to tolerate bursts in the workload without causing an overflow of the write cache. Our results show that linear threshold scheduling provides the best read performance of all the algorithms compared, while still maintaining a high degree of burst tolerance. An approximate implementation of the linear-threshold scheduling algorithm is also described. The approximate algorithm can be implemented with much lower overhead, yet its performance is virtually identical to that of the ideal algorithm.
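The linear threshold idea is easy to state directly: the background destage rate grows linearly with write-cache occupancy between two watermarks. A sketch with illustrative parameter values (not taken from the paper):

```python
def destage_rate(occupancy, low=0.2, high=0.9, max_rate=100.0):
    """Linear-threshold destage scheduling (illustrative sketch).

    occupancy: write-cache fill level in [0, 1].
    Returns destages per second to issue in the background:
    zero below `low` (stay out of the way of host reads),
    maximal above `high` (avoid cache overflow), linear in between.
    """
    if occupancy <= low:
        return 0.0
    if occupancy >= high:
        return max_rate
    return max_rate * (occupancy - low) / (high - low)
```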
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.033175
0.033129
0.032455
0.026018
0.024481
0.01853
0
0
0
0
0
0
0
0
RAIF: Redundant Array of Independent Filesystems Storage virtualization and data management are well known problems for individual users as well as large organizations. Existing storage-virtualization systems either do not support a complete set of possible storage types, do not provide flexible data-placement policies, or do not support per-file conversion (e.g., encryption). This results in suboptimal utilization of resources, inconvenience, low reliability, and poor performance. We have designed a stackable file system called redundant array of independent filesystems (RAIF). It combines the data survivability and performance benefits of traditional RAID with the flexibility of composition and ease of development of stackable file systems. RAIF can be mounted on top of directories and thus on top of any combination of network, distributed, disk-based, and memory-based file systems. Individual files can be replicated, striped, or stored with erasure-correction coding on any subset of the underlying file systems. RAIF has similar performance to RAID. In configurations with parity, RAIF's write performance is better than the performance of driver-level and even entry-level hardware RAID systems. This is because RAIF has better control over the data and parity caching.
Active Names: flexible location and transport of wide-area resources In this paper, we explore flexible name resolution as a way of supporting extensibility for wide-area distributed services. Our approach, called Active Names, maps names to a chain of mobile programs that can customize how a service is located and how its results are transformed and transported back to the client. To illustrate the properties of our system, we implemented prototypes of server selection based on end-to-end performance measurements, location-independent data transformation, and caching of composable active objects, and demonstrate up to a five-fold performance improvement to end users relative to protocols in widespread use. We show how these new services are developed, composed, and secured in our framework. Finally, we develop a set of algorithms to control how mobile Active Name programs are mapped onto available wide-area resources to optimize performance and availability.
Stupid file systems are better File systems were originally designed for hosts with only one disk. Over the past 20 years, a number of increasingly complicated changes have optimized the performance of file systems on a single disk. Over the same time, storage systems have advanced on their own, separated from file systems by the narrow block interface. Storage systems have increasingly employed parallelism and virtualization. Parallelism seeks to increase throughput and strengthen fault-tolerance. Virtualization employs additional levels of data addressing indirection to improve system flexibility and lower administration costs. Do the optimizations of file systems make sense for current storage systems? In this paper, I show that the performance of a current advanced local file system is sensitive to the virtualization parameters of its storage system. Sometimes random block layout outperforms smart file system layout. In addition, random block layout stabilizes performance across several virtualization parameters. This approach has the advantage of immunizing file systems to changes in their underlying storage systems.
Efficient parity placement schemes for tolerating triple disk failures in RAID architectures This paper proposes two improved triple parity placement schemes, the HDD1 (horizontal and dual diagonal) and HDD2 schemes, to enhance the reliability of a RAID system. Both schemes can tolerate up to three disk failures by using three types of parity information (horizontal, diagonal, and anti-diagonal parities) in RAID disk block partitions. The HDD1 scheme can reduce the occurrences of bottlenecks because its horizontal and anti-diagonal parities are uniformly distributed over a disk array, while diagonal parities are placed in a dedicated disk. The HDD2 scheme uses one more disk than HDD1 to store the horizontal parities and an additional diagonal parity, while the anti-diagonal and the diagonal parities are placed in the same way as in the HDD1 scheme, only with a minor difference. The encoding and decoding algorithms of both schemes are simple and effective. Many of the steps of the encoding and decoding algorithms can be executed in parallel. Both schemes enable a RAID to recover rapidly from up to three disk failures, with a single algorithm applied straightforwardly.
Umbrella file system: Storage management across heterogeneous devices With the advent of and recent developments in Flash storage, device characteristic diversity is becoming both more prevalent and more distinct. In this article, we describe the Umbrella File System (UmbrellaFS), a stackable file system designed to provide flexibility in matching diversity of file access characteristics to diversity of device characteristics through a user or system administrator specified policy. We present the design and results from a prototype implementation of UmbrellaFS on both Linux 2.4 and 2.6. The results show that UmbrellaFS has little overhead for most file system operations while providing an ability better to utilize the differences in Flash and traditional hard drives. With appropriate use of rules, we have shown improvements of up to 44% in certain situations.
Minerva: An automated resource provisioning tool for large-scale storage systems Enterprise-scale storage systems, which can contain hundreds of host computers and storage devices and up to tens of thousands of disks and logical volumes, are difficult to design. The volume of choices that need to be made is massive, and many choices have unforeseen interactions. Storage system design is tedious and complicated to do by hand, usually leading to solutions that are grossly over-provisioned, substantially under-performing or, in the worst case, both. To solve the configuration nightmare, we present Minerva: a suite of tools for designing storage systems automatically. Minerva uses declarative specifications of application requirements and device capabilities; constraint-based formulations of the various sub-problems; and optimization techniques to explore the search space of possible solutions. This paper also explores and evaluates the design decisions that went into Minerva, using specialized micro- and macro-benchmarks. We show that Minerva can successfully handle a workload with substantial complexity (a decision-support database benchmark). Minerva created a 16-disk design in only a few minutes that achieved the same performance as a 30-disk system manually designed by human experts. Of equal importance, Minerva was able to predict the resulting system's performance before it was built.
A performance evaluation of RAID architectures In today's computer systems, the disk I/O subsystem is often identified as the major bottleneck to system performance. One proposed solution is the so-called redundant array of inexpensive disks (RAID). We examine the performance of two of the most promising RAID architectures, the mirrored array and the rotated parity array. First, we propose several scheduling policies for the mirrored array and a new data layout, group-rotate declustering, and compare their performance with each other and in combination with other data layout schemes. We observe that a policy that routes reads to the disk with the smallest number of requests provides the best performance, especially when the load on the I/O system is high. Second, through a combination of simulation and analysis, we compare the performance of this mirrored array architecture to the rotated parity array architecture. This latter study shows that: 1) given the same storage capacity (approximately double the number of disks), the mirrored array considerably outperforms the rotated parity array; and 2) given the same number of disks, the mirrored array still outperforms the rotated parity array in most cases, even for applications where I/O requests are for large amounts of data. The only exception occurs when the I/O size is very large; most of the requests are writes, and most of these writes perform full stripe write operations.
ARC: A Self-Tuning, Low Overhead Replacement Cache We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashion. The policy ARC uses a learning rule to adaptively and continually revise its assumptions about the workload. The policy ARC is empirically universal, that is, it empirically performs as well as a certain fixed replacement policy, even when the latter uses the best workload-specific tuning parameter that was selected in an offline fashion. Consequently, ARC works uniformly well across varied workloads and cache sizes without any need for workload-specific a priori knowledge or tuning. Various policies such as LRU-2, 2Q, LRFU, and LIRS require user-defined parameters, and, unfortunately, no single choice works uniformly well across different workloads and cache sizes. The policy ARC is simple to implement and, like LRU, has constant complexity per request. In comparison, policies LRU-2 and LRFU both require logarithmic time complexity in the cache size. The policy ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache. On 23 real-life traces drawn from numerous domains, ARC leads to substantial performance gains over LRU for a wide range of cache sizes. For example, for an SPC1-like synthetic benchmark, at 4GB cache, LRU delivers a hit ratio of 9.19% while ARC achieves a hit ratio of 20%.
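Since the abstract spells out the mechanism, a compact sketch is possible. The following Python class follows the published ARC pseudocode (four lists T1, T2, B1, B2 and the adaptive target p); it tracks membership only and leaves out everything a production cache needs:

```python
from collections import OrderedDict

class ARC:
    """Minimal sketch of Adaptive Replacement Cache (ARC).

    T1/T2 hold cached pages seen once / at least twice; B1/B2 are
    "ghost" lists remembering recently evicted pages. The target size
    p of T1 adapts whenever a ghost hit reveals that recency (B1) or
    frequency (B2) deserved more space.
    """

    def __init__(self, size):
        self.c = size
        self.p = 0                                       # adaptive target for |T1|
        self.t1, self.t2 = OrderedDict(), OrderedDict()  # cached pages
        self.b1, self.b2 = OrderedDict(), OrderedDict()  # ghost histories

    def _replace(self, in_b2):
        # Evict the LRU page of T1 or T2 into the matching ghost list.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            page, _ = self.t1.popitem(last=False)
            self.b1[page] = None
        else:
            page, _ = self.t2.popitem(last=False)
            self.b2[page] = None

    def request(self, page):
        """Return True on a cache hit."""
        if page in self.t1 or page in self.t2:           # Case I: hit
            self.t1.pop(page, None)
            self.t2.pop(page, None)
            self.t2[page] = None                         # move to MRU of T2
            return True
        if page in self.b1:                              # Case II: recency wins
            self.p = min(self.c, self.p + max(1, len(self.b2) // len(self.b1)))
            del self.b1[page]
            self._replace(False)
            self.t2[page] = None
            return False
        if page in self.b2:                              # Case III: frequency wins
            self.p = max(0, self.p - max(1, len(self.b1) // len(self.b2)))
            del self.b2[page]
            self._replace(True)
            self.t2[page] = None
            return False
        # Case IV: complete miss.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)              # B1 empty: discard LRU of T1
        elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
            if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) == 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(False)
        self.t1[page] = None                             # MRU of T1
        return False
```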
Informed prefetching of collective input/output requests
IBM TotalStorage Enterprise Storage Server: A designer's view In this paper, we describe the background, objectives, and major decisions associated with the design of IBM TotalStorage™ Enterprise Storage Server® (ESS), IBM's high-end disk storage system. We first present a brief history of disk storage development over the past three decades and then describe ESS architecture and basic functions. Next we discuss the goals associated with the design of ESS and the methods used to achieve these goals. We then explore some design decisions that significantly affected ESS architecture and performance, and we conclude with some comments about possible future enhancements.
Reasoning about action I: a possible worlds approach Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to the related problem that it is not always reasonable to explicitly specify all of the consequences of actions. The approach involves keeping a single model of the world that is updated when actions...
Explicit and implicit indeterminism: reasoning about uncertain and contradictory specifications of dynamic systems A high-level action semantics for specifying and reasoning about dynamic systems is presented which supports both uncertain knowledge (taken as explicit indeterminism) and contradictory information (taken as implicit indeterminism). We start by developing an action description language for intentionally representing nondeterministic actions in dynamic systems. We then study the different possibilities of interpreting contradictory specifications of concurrent actions. We argue that the most reasonable interpretation, which allows for exploiting as much information as possible, is to take such conflicts as implicit indeterminism. As the second major contribution, we present a calculus for our resulting action semantics based on the logic programming paradigm including negation-as-failure and equational theories. Soundness and completeness of this encoding wrt. the notion of entailment in our action language are proved by taking the completion semantics for equational logic programs with negation.
An efficient scheme for providing high availability Replication at the partition level is a promising approach for increasing availability in a Shared Nothing architecture. We propose an algorithm for maintaining replicas with little overhead during normal failure-free processing. Our mechanism updates the secondary replica in an asynchronous manner: entire dirty pages are sent to the secondary at some time before they are discarded from the primary's buffer. A log server node (hardened against failures) maintains the log for each node. If a primary node fails, the secondary fetches the log from the log server, applies it to its replica, and brings itself to the primary's last transaction-consistent state. We study the performance of various policies for sending pages to the secondary and the corresponding trade-offs between recovery time and overhead during failure-free processing.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
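A toy sketch of the arithmetic behind this kind of extra redundancy (my own simplification, not the paper's actual two-dimensional layout): XOR parities over a block grid, with the proposed extra elements simply mirroring half of the existing parities.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the parity arithmetic)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

n = 4
data = [[bytes([i * n + j] * 8) for j in range(n)] for i in range(n)]  # toy n x n grid

row_parity = [xor_blocks(data[i]) for i in range(n)]                         # n elements
col_parity = [xor_blocks([data[i][j] for i in range(n)]) for j in range(n)]  # n more

# The proposed extra redundancy: n further elements that simply mirror
# half of the 2n existing parities; no new parity math is introduced.
row_parity_mirror = list(row_parity)

# A lost block is rebuilt from its row peers plus the row parity (or the
# mirror, if that parity element is unavailable too).
i, j = 1, 2
rebuilt = xor_blocks([data[i][k] for k in range(n) if k != j] + [row_parity[i]])
assert rebuilt == data[i][j]
```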
1.03001
0.030071
0.014337
0.010476
0.007143
0.003429
0.001185
0.000296
0.000039
0.000008
0
0
0
0
An Effective QBF Solver for Planning Problems A large number of applications can be represented by quantified Boolean formulas (QBF). Although evaluating QBF is NP-hard and thus very difficult, there has been significant progress in the development of QBF solvers. These solvers require the quantified Boolean formula to be in a standard format. We have encountered a large class of problems whose representation as QBF is not in that standard format. If we apply current state-of-the-art QBF solvers, the required transformation into standard format increases the size of the formula and tends to hide structural properties of the problem class. We suggest a direct attack of the problem. The solution algorithm is based on backtracking search and on a new form of learning clauses. We have tested a first implementation of the algorithm on a class of planning problems. The tests show that the approach is significantly faster than current state-of-the-art QBF solvers.
Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning In Verification and in (optimal) AI Planning, a successful method is to formulate the application as boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior, the proof arguments, illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.
The good, the bad, and the odd: cycles in answer-set programs Backdoors of answer-set programs are sets of atoms that represent "clever reasoning shortcuts" through the search space. Assignments to backdoor atoms reduce the given program to several programs that belong to a tractable target class. Previous research has considered target classes based on notions of acyclicity where various types of cycles (good and bad cycles) are excluded from graph representations of programs. We generalize the target classes by taking the parity of the number of negative edges on bad cycles into account and consider backdoors for such classes. We establish new hardness results and non-uniform polynomial-time tractability relative to directed or undirected cycles.
Solving quantified boolean formulas with circuit observability don't cares Traditionally the propositional part of a Quantified Boolean Formula (QBF) instance has been represented using a conjunctive normal form (CNF). As with propositional satisfiability (SAT), this is motivated by the efficiency of this data structure. However, in many cases, part of or the entire propositional part of a QBF instance can often be represented as a combinational logic circuit. In a logic circuit, the limited observability of the internal signals at the circuit outputs may make their assignments irrelevant for specific assignments of values to other signals in the circuit. This circuit observability don't care (ODC) information has been used to advantage in circuit based SAT solvers. A CNF encoding of the circuit, however, does not capture the signal direction and this limited observability, and thus cannot directly take advantage of this. However, recently it has been shown that this don't care information can be encoded in the CNF description and taken advantage of in a DPLL based SAT solver by modifying the decision heuristics/Boolean constraint propagation/conflict-driven-learning to account for these don't cares. Thus far, however, the use of these don't cares in the CNF encoding has not been explored for QBF solvers. In this paper, we examine how this can be done for QBF solvers as well as evaluate its practical benefits through experimentation. We have developed and implemented the usage of circuit ODCs in various parts of the DPLL-based procedure of the Quaffle QBF solver. We show that DPLL search based QBF solvers can use circuit ODC information to detect satisfying branches earlier during search and make satisfiability directed learning more effective. Our experiments demonstrate that significant performance gain can be obtained by considering circuit ODCs in checking the satisfiability of QBFs.
Proof Systems for Planning Under Cautious Semantics Planning with incomplete knowledge has been a very active research area since the late 1990s. Many logical formalisms introduce sensing actions and conditional plans to address the problem. The action language A_K invented by Son and Baral is a well-known framework for this purpose. In this paper, we propose so-called cautious and weakly cautious semantics for A_K, in order to allow an agent to generate and execute reliable plans in safety-critical environments. Intuitively speaking, cautious and weakly cautious semantics enable the agent to know exactly what happens after the execution of an action. Computational complexity analysis shows that cautious semantics reduces the reasoning complexity of A_K; it is also worth pointing out that many useful domains can still be expressed with this setting. Another important contribution of our work is the development of Hoare-style proof systems. These proof systems serve as inference mechanisms for the verification of conditional plans, and are proved to be sound and complete. In addition, they can also be used for plan generation, in the sense that constructing a derivation is indeed a procedure for finding a plan. We point out that the proof systems possess a nice property for off-line planning, namely that the agent can generate and store short proofs in her spare time, and perform quick plan queries by easily constructing a long proof from the stored shorter ones (under the assumption that sufficient proofs are stored).
Bounded universal expansion for preprocessing QBF We present a new approach for preprocessing Quantified Boolean Formulas (QBF) in conjunctive normal form (CNF) by expanding a selection of universally quantified variables with bounded expansion costs. We describe a suitable selection strategy which exploits locality of universals and combines cost estimates with goal orientation by taking into account unit literals which might be obtained. Furthermore, we investigate how Q-resolution can be integrated into this method. In particular, resolution is applied specifically to reduce the amount of copying necessary for universal expansion. Experimental results demonstrate that our preprocessing can successfully improve the performance of state-of-the-art QBF solvers on well-known problems from the QBFLIB collection.
Asymptotically optimal encodings of conformant planning in QBF The world is unpredictable, and acting intelligently requires anticipating possible consequences of actions that are taken. Assuming that the actions and the world are deterministic, planning can be represented in the classical propositional logic. Introducing nondeterminism (but not probabilities) or several initial states increases the complexity of the planning problem and requires the use of quantified Boolean formulae (QBF). The currently leading logic-based approaches to conditional planning use explicitly or implicitly a QBF with the prefix ∃∀∃. We present formalizations of the planning problem as QBF which have an asymptotically optimal linear size and the optimal number of quantifier alternations in the prefix: ∃∀ and ∀∃. This is in accordance with the fact that the planning problem (under the restriction to polynomial size plans) is on the second level of the polynomial hierarchy, not on the third.
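In rough notation (mine, not the paper's), the asymptotically optimal encodings assert that some plan works under every contingency, giving the prefix ∃∀:

\[
\exists \vec{p}\;\forall \vec{s}\;\Bigl[\bigl(\mathrm{Init}(\vec{s})\wedge\mathrm{Exec}(\vec{p},\vec{s})\bigr)\rightarrow\mathrm{Goal}(\vec{s})\Bigr]
\]

where \(\vec{p}\) encodes the plan and \(\vec{s}\) encodes the initial state and nondeterministic outcomes; reading the same condition as "for every contingency there is a successful trace of the chosen plan" yields the dual ∀∃ form mentioned in the abstract.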
Conjunctive-query containment and constraint satisfaction Conjunctive-query containment is recognized as a fundamental problem in database query evaluation and optimization. At the same time, constraint satisfaction is recognized as a fundamental problem in artificial intelligence. What do conjunctive-query containment and constraint satisfaction have in common? Our main conceptual contribution in this paper is to point out that, despite their very different formulation, conjunctive-query containment and constraint satisfaction are essentially the same problem. The reason is that they can be recast as the following fundamental algebraic problem: given two finite relational structures A and B , is there a homomorphism h : A → B ? As formulated above, the homomorphism problem is uniform in the sense that both relational structures A and B are part of the input. By fixing the structure B , one obtains the following nonuniform problem: given a finite relational structure A , is there a homomorphism h : A → B ? In general, nonuniform tractability results do not uniformize. Thus, it is natural to ask: which tractable cases of nonuniform tractability results for constraint satisfaction and conjunctive-query containment do uniformize? Our main technical contribution in this paper is to show that several cases of tractable nonuniform constraint-satisfaction problems do indeed uniformize. We exhibit three nonuniform tractability results that uniformize and, thus, give rise to polynomial-time solvable cases of constraint satisfaction and conjunctive-query containment. We begin by examining the tractable cases of Boolean constraint-satisfaction problems and show that they do uniformize. This can be applied to conjunctive-query containment via Booleanization ; in particular, it yields one of the known tractable cases of conjunctive-query containment. After this, we show that tractability results for constraint-satisfaction problems that can be expressed using Datalog programs with bounded number of distinct variables also uniformize. Finally, we provide a new proof for the fact that tractability results for queries with bounded treewidth uniformize as well, via a connection with first-order logic with a bounded number of distinct variables.
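The homomorphism reformulation is simple enough to state executably. Below is a brute-force Python check, exponential in the size of A and purely illustrative (the paper is precisely about which cases avoid this blowup):

```python
from itertools import product

def homomorphism_exists(univ_a, rels_a, univ_b, rels_b):
    """Brute-force test for a homomorphism h: A -> B.

    univ_* are lists of elements; rels_* map relation names to sets of
    tuples. h must send every tuple of every relation of A into the
    corresponding relation of B.
    """
    for image in product(univ_b, repeat=len(univ_a)):
        h = dict(zip(univ_a, image))
        if all(tuple(h[x] for x in t) in rels_b[name]
               for name, tuples in rels_a.items()
               for t in tuples):
            return True
    return False

# A directed triangle maps onto itself, but not into a single edge.
triangle = {"E": {(0, 1), (1, 2), (2, 0)}}
edge = {"E": {(0, 1)}}
assert homomorphism_exists([0, 1, 2], triangle, [0, 1, 2], triangle)
assert not homomorphism_exists([0, 1, 2], triangle, [0, 1], edge)
```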
BDD-based decision procedures for the modal logic K We describe BDD-based decision procedures for the modal logic K. Our approach is inspired by the automata-theoretic approach, but we avoid explicit automata construction. Instead, we compute certain fixpoints of a set of types—which can be viewed as an on-the-fly emptiness test of the automaton. We use BDDs to represent and manipulate such type sets, and investigate different kinds of representations as well as a "level-based" representation scheme. The latter turns out to speed up construction and reduce memory consumption considerably. We also study the effect of formula simplification on our decision procedures. To prove the viability of our approach, we compare it with a representative selection of other approaches, including a translation of K to QBF. Our results indicate that the BDD-based approach dominates for modally heavy formulae, while search-based approaches dominate for propositionally heavy formulae.
Formalizing narratives using nested circumscription Representing and reasoning about narratives together with the ability to do hypothetical reasoning is important for agents in a dynamic world. These agents need to record their observations and action executions as a narrative and, at the same time, to achieve their goals against a changing environment, they need to make plans (or re-plan) from the current situation. The early action formalisms did one or the other. For example, while the original situation calculus was meant for hypothetical reasoning and planning, the event calculus was more appropriate for narratives. Recently, there have been some attempts at developing formalisms that do both. Independently, there has also been a lot of recent research in reasoning about actions using circumscription. Of particular interest to us is the research on using high-level languages and their logical representation using nested abnormality theories (NATs), a form of circumscription with blocks that make knowledge representation modular. Starting from theories in the high-level language L, which is extended to allow concurrent actions, we define a translation to NATs that preserves both narrative and hypothetical reasoning. We initially use the high-level language L, and then extend it to allow concurrent actions. In the process, we study several knowledge representation issues such as filtering, and restricted monotonicity with respect to NATs. Finally, we compare our formalization with other approaches, and discuss how our use of NATs makes it easier to incorporate other features of action theories, such as constraints, to our formalization.
Storage Technology: RAID and Beyond
Natural language processing with modular pdp networks and distributed lexicon An approach to connectionist natural language processing is proposed, which is based on hierarchically organized modular parallel distributed processing (PDP) networks and a central lexicon of distributed input/output representations. The modules communicate using these representations, which are global and publicly available in the system. The representations are developed automatically by all networks while they are learning their processing tasks. The resulting representations reflect the regularities in the subtasks, which facilitates robust processing in the face of noise and damage, supports improved generalization, and provides expectations about possible contexts. The lexicon can be extended by cloning new instances of the items, that is, by generating a number of items with known processing properties and distinct identities. This technique combinatorially increases the processing power of the system. The recurrent FGREP module, together with a central lexicon, is used as a basic building block in modeling higher level natural language tasks. A single module is used to form case-role representations of sentences from word-by-word sequential natural language input. A hierarchical organization of four recurrent FGREP modules (the DISPAR system) is trained to produce fully expanded paraphrases of script-based stories, where unmentioned events and role fillers are inferred.
Automated Tuning of Parallel I/O Systems: An Approach to Portable I/O Performance for Scientific Applications Parallel I/O systems typically consist of individual processors, communication networks, and a large number of disks. Managing and utilizing these resources to meet performance, portability, and usability goals of high-performance scientific applications has become a significant challenge. For scientists, the problem is exacerbated by the need to retune the I/O portion of their code for each supercomputer platform where they obtain access. We believe that a parallel I/O system that automatically selects efficient I/O plans for user applications is a solution to this problem. In this paper, we present such an approach for scientific applications performing collective I/O requests on multidimensional arrays. Under our approach, an optimization engine in a parallel I/O system selects high-quality I/O plans without human intervention, based on a description of the application I/O requests and the system configuration. To validate our hypothesis, we have built an optimizer that uses rule-based and randomized search-based algorithms to tune parameter settings in Panda, a parallel I/O library for multidimensional arrays. Our performance results obtained from an IBM SP using an out-of-core matrix multiplication application show that the Panda optimizer is able to select high-quality I/O plans and deliver high performance under a variety of system configurations with a small total optimization overhead.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.025698
0.029406
0.022348
0.020956
0.02
0.007773
0.002752
0.000967
0.000105
0.000006
0
0
0
0
The satanic notations: counting classes beyond #P and other definitional adventures We explore the potentially "off-by-one" nature of the definitions of counting (#P versus #NP), difference (DP versus DNP), and unambiguous (UP versus UNP; FewP versus FewNP) classes, and make suggestions as to logical approaches in each case. We discuss the strangely differing representations that oracle and predicate models give for counting classes, and we survey the properties of counting classes beyond #P. We ask whether subtracting a #P function from a P function that dominates it necessarily yields a #P function.
Sets of boolean connectives that make argumentation easier Many proposals for logic-based formalizations of argumentation consider an argument as a pair (φ, α), where the support φ is understood as a minimal consistent subset of a given knowledge base which has to entail the claim α. In most scenarios, arguments are given in the full language of classical propositional logic which makes reasoning in such frameworks a computationally costly task. For instance, the problem of deciding whether there exists a support for a given claim has been shown to be Σ2P-complete. In order to better understand the sources of complexity (and to identify tractable fragments), we focus on arguments given over formulae in which the allowed connectives are taken from certain sets of Boolean functions. We provide a complexity classification for four different decision problems (existence of a support, checking the validity of an argument, relevance and dispensability) with respect to all possible sets of Boolean functions.
A complexity theory for feasible closure properties The study of the complexity of sets encompasses two complementary aims: (1) establishing—usually via explicit construction of algorithms—that sets are feasible, and (2) studying the relative complexity of sets that plausibly might be feasible but are not currently known to be feasible (such as the NP-complete sets and the PSPACE-complete sets). For the study of the complexity of closure properties, a recent flurry of results has established an analog of (1); these papers explicitly demonstrate many closure properties possessed by PP and C = P (and the proofs implicitly give closure properties of the function class #P). The present paper presents and develops, for function classes such as #P, SpanP, OptP, and MidP, an analog of (2): a general theory of the complexity of closure properties. In particular, we show that subtraction is hard for the closure properties of each of these classes: each is closed under subtraction if and only if it is closed under every polynomial-time operation. Previously, no property—natural or unnatural—had been known to have this behavior. We also prove other natural operations hard for the closure properties of #P, SpanP, OptP, and MidP, and we explore the relative complexity of operations that seem not to be #P-hard, such as maximum, minimum, decrement, and median. Moreover, for each of #P, SpanP, OptP, and MidP, we give a natural complete characterization—in terms of the collapse of complexity classes—of the conditions under which that class has every feasible closure property.
NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions.
On the complexity of database queries We revisit the issue of the complexity of database queries, in the light of the recent parametric refinement of complexity theory. We show that, if the query size (or the number of variables in the query) is considered as a parameter, then the relational calculus and its fragments (conjunctive queries, positive queries) are classified at appropriate levels of the so-called W hierarchy of Downey and Fellows. These results strongly suggest that the query size is inherently in the exponent of the data complexity of any query evaluation algorithm, with the implication becoming stronger as the expressibility of the query language increases. For recursive languages (fixpoint logic, Datalog) this is provably the case (14). On the positive side, we show that this exponential dependence can be avoided for the extension of acyclic queries with ≠ (but not <) inequalities.
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
Conjunctive-query containment and constraint satisfaction Conjunctive-query containment is recognized as a fundamental problem in database query evaluation and optimization. At the same time, constraint satisfaction is recognized as a fundamental problem in artificial intelligence. What do conjunctive-query containment and constraint satisfaction have in common? Our main conceptual contribution in this paper is to point out that, despite their very different formulation, conjunctive-query containment and constraint satisfaction are essentially the same problem. The reason is that they can be recast as the following fundamental algebraic problem: given two finite relational structures A and B , is there a homomorphism h : A → B ? As formulated above, the homomorphism problem is uniform in the sense that both relational structures A and B are part of the input. By fixing the structure B , one obtains the following nonuniform problem: given a finite relational structure A , is there a homomorphism h : A → B ? In general, nonuniform tractability results do not uniformize. Thus, it is natural to ask: which tractable cases of nonuniform tractability results for constraint satisfaction and conjunctive-query containment do uniformize? Our main technical contribution in this paper is to show that several cases of tractable nonuniform constraint-satisfaction problems do indeed uniformize. We exhibit three nonuniform tractability results that uniformize and, thus, give rise to polynomial-time solvable cases of constraint satisfaction and conjunctive-query containment. We begin by examining the tractable cases of Boolean constraint-satisfaction problems and show that they do uniformize. This can be applied to conjunctive-query containment via Booleanization ; in particular, it yields one of the known tractable cases of conjunctive-query containment. After this, we show that tractability results for constraint-satisfaction problems that can be expressed using Datalog programs with bounded number of distinct variables also uniformize. Finally, we provide a new proof for the fact that tractability results for queries with bounded treewidth uniformize as well, via a connection with first-order logic with a bounded number of distinct variables.
The design of POSTGRES This paper presents the preliminary design of a new database management system, called POSTGRES, that is the successor to the INGRES relational database system. The main design goals of the new system are to: provide better support for complex objects; provide user extendibility for data types, operators and access methods; provide facilities for active databases (i.e., alerters and triggers) and inferencing including forward- and backward-chaining; simplify the DBMS code for crash recovery; produce a design that can take advantage of optical disks, workstations composed of multiple tightly-coupled processors, and custom designed VLSI chips; and make as few changes as possible (preferably none) to the relational model. The paper describes the query language, programming language interface, system architecture, query processing strategy, and storage system for the new system.
B-tree indexes for high update rates In some applications, data capture dominates query processing. For example, monitoring moving objects often requires more insertions and updates than queries. Data gathering using automated sensors often exhibits this imbalance. More generally, indexing streams is considered an unsolved problem. For those applications, B-tree indexes are good choices if some trade-off decisions are tilted towards optimization of updates rather than towards optimization of queries. This paper surveys some techniques that let B-trees sustain very high update rates, up to multiple orders of magnitude higher than traditional B-trees, at the expense of query processing performance. Not surprisingly, some of these techniques are reminiscent of those employed during index creation, index rebuild, etc., while other techniques are derived from well known technologies such as differential files and log-structured file systems.
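A minimal sketch of the update-versus-query trade-off described above, in the spirit of differential files (the names and buffer size are invented for illustration): insertions go to a cheap buffer and are merged in bulk, while lookups must consult both structures.

```python
import bisect
import heapq

class BufferedIndex:
    """Toy sketch: trade query cost for update throughput.

    Inserts land in a small unsorted buffer (cheap, like a differential
    file); when it fills, one bulk merge folds it into the main sorted
    run, amortizing work a B-tree would pay per key. Lookups consult both.
    """

    def __init__(self, buffer_limit=1024):
        self.main = []        # large sorted run
        self.buffer = []      # recent unsorted inserts
        self.limit = buffer_limit

    def insert(self, key):
        self.buffer.append(key)          # O(1) per update
        if len(self.buffer) >= self.limit:
            self.main = list(heapq.merge(self.main, sorted(self.buffer)))
            self.buffer = []

    def contains(self, key):
        if key in self.buffer:           # queries pay a linear buffer scan
            return True
        i = bisect.bisect_left(self.main, key)
        return i < len(self.main) and self.main[i] == key
```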
An Introduction to MCMC for Machine Learning The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.
Actions and specificity A solution to the problem of specificity in a resource-oriented deductive approach to actions and change is presented. Specificity originates in the problem of overloading methods in object oriented frameworks but can be observed in general applications of actions and change in logic. We give a uniform solution to the problem of specificity culminating in a completed equational logic program with an equational theory. We show the soundness and completeness of SLDENF-resolution, i.e., SLD-resolution augmented by negation-as-failure and by an equational theory, wrt the completed program. Finally, the expressiveness of our approach for performing general reasoning about actions, change, and causality is demonstrated.
Phoenix: a safe in-memory file system Phoenix maintains two timestamped versions of the in-memory file system, allowing a reserve version that ensures safety for diskless computers with battery-powered memory.
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
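A round-based greedy heuristic illustrates the flavor of the scheduling problem (a sketch under the common one-transfer-per-device-per-round model, not the approximate algorithms evaluated in the paper):

```python
def schedule_transfers(transfers):
    """Greedy round-based schedule for a batch of (client, disk) I/O
    transfers: in each round a client issues at most one transfer and a
    disk serves at most one. The maximum degree of any client or disk
    is a lower bound on the number of rounds."""
    remaining = list(transfers)
    rounds = []
    while remaining:
        busy_clients, busy_disks = set(), set()
        this_round, deferred = [], []
        for client, disk in remaining:
            if client not in busy_clients and disk not in busy_disks:
                this_round.append((client, disk))
                busy_clients.add(client)
                busy_disks.add(disk)
            else:
                deferred.append((client, disk))
        rounds.append(this_round)
        remaining = deferred
    return rounds

batch = [("c0", "d0"), ("c0", "d1"), ("c1", "d0"), ("c1", "d1")]
for i, rnd in enumerate(schedule_transfers(batch)):
    print(f"round {i}: {rnd}")
```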
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.071111
0.08
0.033333
0.002899
0.000674
0.00023
0.00006
0
0
0
0
0
0
0
An approach for extracting a small unsatisfiable core The article addresses the problem of finding a small unsatisfiable core of an unsatisfiable CNF formula. The proposed algorithm, CoreTrimmer, iterates over each internal node d in the resolution graph that 'consumes' a large number of clauses M (i.e., a large number of original clauses are present in the unsat core with the sole purpose of proving d) and attempts to prove them without the M clauses. If this is possible, it transforms the resolution graph into a new graph that does not have the M clauses at its core. CoreTrimmer can be integrated into a fixpoint framework similarly to Malik and Zhang's fix-point algorithm run_till_fix. We call this option trim_till_fix. Experimental evaluation on a large number of industrial CNF unsatisfiable formulas shows that trim_till_fix doubles, on average, the number of reduced clauses in comparison to run_till_fix. It is also better when used as a component in a bigger system that enforces short timeouts.
A branch and bound algorithm for extracting smallest minimal unsatisfiable subformulas Explaining the causes of infeasibility of Boolean formulas has practical applications in numerous fields, such as artificial intelligence (repairing inconsistent knowledge bases), formal verification (abstraction refinement and unbounded model checking), and electronic design (diagnosing and correcting infeasibility). Minimal unsatisfiable subformulas (MUSes) provide useful insights into the causes of infeasibility. An unsatisfiable formula often has many MUSes. Based on the application domain, however, MUSes with specific properties might be of interest. In this paper, we tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. An SMUS provides a succinct explanation of infeasibility and is valuable for applications that are heavily affected by the size of the explanation. We present (1) a baseline algorithm for finding an SMUS, founded on earlier work for finding all MUSes, and (2) a new branch-and-bound algorithm called Digger that computes a strong lower bound on the size of an SMUS and splits the problem into more tractable subformulas in a recursive search tree. Using two benchmark suites, we experimentally compare Digger to the baseline algorithm and to an existing incomplete genetic algorithm approach. Digger is shown to be faster in nearly all cases. It is also able to solve far more instances within a given runtime limit than either of the other approaches.
Boosting minimal unsatisfiable core extraction A variety of tasks in formal verification require finding small or minimal unsatisfiable cores (subsets) of an unsatisfiable set of constraints. This paper proposes two algorithms for finding a minimal unsatisfiable core or, if a time-out occurs, a small non-minimal unsatisfiable core. Our algorithms can be applied to either standard clause-level unsatisfiable core extraction or high-level unsatisfiable core extraction, that is, an extraction of an unsatisfiable core in terms of “interesting” propositional constraints supplied by the user application. We demonstrate that one of our algorithms outperforms existing algorithms for clause-level minimal unsatisfiable core extraction on large well-known industrial benchmarks. We also show that our algorithms are highly scalable for the problem of high-level minimal unsatisfiable core extraction on huge benchmarks generated by Intel's proof-based abstraction refinement flow. In addition, we provide a comparative analysis of the impact of various algorithms on unsatisfiable core extraction.
A branch-and-bound algorithm for extracting smallest minimal unsatisfiable formulas We tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. The SMUS provides a succinct explanation of infeasibility and is valuable for applications that rely on such explanations. We present a branch-and-bound algorithm that utilizes iterative MAXSAT solutions to generate lower and upper bounds on the size of the SMUS, and branch on specific subformulas to find it. We report experimental results on formulas from DIMACS and DaimlerChrysler product configuration suites.
On Approaches to Explaining Infeasibility of Sets of Boolean Clauses In recent years, the issue of locating and explaining contradictions inside sets of propositional clauses has received renewed attention due to the emergence of very efficient SAT solvers. In case of inconsistency, many such solvers merely conclude that no solution exists or provide an upper approximation of the subset of clauses that are contradictory. However, in most application domains, only knowing that a problem does not admit any solution is not informative enough, and it is important to know which clauses are actually conflicting. In this paper, the focus is on the concept of minimally unsatisfiable subformulas (MUSes), which explain logical inconsistency in terms of minimal sets of contradictory clauses. Specifically, various recent results and computational approaches about MUSes and related concepts are discussed.
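For concreteness, here is the standard deletion-based loop for extracting one minimal unsatisfiable subformula, with a naive enumeration-based satisfiability check so the sketch stays self-contained (real extractors use industrial SAT solvers and resolution-proof information):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Naive SAT check by enumeration (fine for tiny sketches only).
    Clauses are lists of nonzero ints; literal v > 0 means variable v
    is true, v < 0 means it is false."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

def minimal_unsat_core(clauses, n_vars):
    """Deletion-based extraction: drop every clause whose removal keeps
    the formula unsatisfiable. The result is a minimal (not necessarily
    minimum-cardinality) unsatisfiable subformula."""
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial, n_vars):
            core = trial             # clause i was redundant
        else:
            i += 1                   # clause i is necessary; keep it
    return core

# (x) (-x) (x or y): the first two clauses already conflict.
print(minimal_unsat_core([[1], [-1], [1, 2]], 2))   # -> [[1], [-1]]
```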
Detecting Inconsistencies in Large Biological Networks with Answer Set Programming We introduce an approach to detecting inconsistencies in large biological networks by using Answer Set Programming. To this end, we build upon a recently proposed notion of consistency between biochemical/genetic reactions and high-throughput profiles of cell activity. We then present an approach based on Answer Set Programming to check the consistency of large-scale data sets. Moreover, we extend this methodology to provide explanations for inconsistencies in the data by determining minimal representations of conflicts. In practice, this can be used to identify unreliable data or to indicate missing reactions.
Categorisation of clauses in conjunctive normal forms: minimally unsatisfiable sub-clause-sets and the lean kernel Finding out that a SAT problem instance F is unsatisfiable is not enough for applications, where good reasons are needed for explaining the inconsistency (so that for example the inconsistency may be repaired). Previous attempts of finding such good reasons focused on finding some minimally unsatisfiable sub-clause-set F' of F, which in general suffers from the non-uniqueness of F' (and thus it will only find some reason, albeit there might be others). In our work, we develop a fuller approach, enabling a more fine-grained analysis of necessity and redundancy of clauses, supported by meaningful semantical and proof-theoretical characterisations. We combine known techniques for searching and enumerating minimally unsatisfiable sub-clause-sets with (full) autarky search. To illustrate our techniques, we give a detailed analysis of well-known industrial problem instances.
Extremal problems in logic programming and stable model computation We study the following problem: given a class of logic programs C, determine the maximum number of stable models of a program from C. We establish the maximum for the class of all logic programs with at most n clauses, and for the class of all logic programs of size at most n. We also characterize the programs for which the maxima are attained. We obtained similar results for the class of all disjunctive logic programs with at most n clauses, each of length at most m, and for the class of all disjunctive logic programs of size at most n. Our results on logic programs have direct implication for the design of algorithms to compute stable models. Several such algorithms, similar in spirit to the Davis-Putnam procedure, are described in the paper. Our results imply that there is an algorithm that finds all stable models of a program with n clauses after considering the search space of size O(3^(n/3)) in the worst case. Our results also provide some insights into the question of representability of families of sets as families of stable models of logic programs.
Near-Optimal Plans, Tractability, and Reactivity Many planning problems have recently been shown to be inherently intractable. For example, finding the shortest plan in the blocks-world domain is NP-hard, and so is planning in even some of the most limited STRIPS-style planning formalisms. We explore the question as to what extent these negative results can be attributed to the insistence on finding plans of minimal length. Using recent results from the theory of combinatorial optimization, we show that for domain-independent planning, we...
Weighted voting for replicated data In a new algorithm for maintaining replicated data, every copy of a replicated file is assigned some number of votes. Every transaction collects a read quorum of r votes to read a file, and a write quorum of w votes to write a file, such that r+w is greater than the total number of votes assigned to the file. This ensures that there is a non-null intersection between every read quorum and every write quorum. Version numbers make it possible to determine which copies are current. The reliability and performance characteristics of a replicated file can be controlled by appropriately choosing r, w, and the file's voting configuration. The algorithm guarantees serial consistency, admits temporary copies in a natural way by the introduction of copies with no votes, and has been implemented in the context of an application system called Violet.
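The quorum-intersection rule is small enough to model directly. This toy Python sketch (simplified: no failures, no locking, vote layout invented) shows why requiring r + w to exceed the total vote count keeps reads current:

```python
import random

class VotingReplicas:
    """Minimal sketch of Gifford-style weighted voting.

    Each replica holds (version, value) and a vote weight. Reads gather
    r votes and return the highest-versioned copy seen; writes gather w
    votes and install a higher version. Because r + w exceeds the total
    votes, every read quorum intersects every write quorum.
    """

    def __init__(self, votes, r, w):
        assert r + w > sum(votes), "quorums must intersect"
        self.replicas = [{"votes": v, "version": 0, "value": None} for v in votes]
        self.r, self.w = r, w

    def _quorum(self, needed):
        # Gather replicas (in random order) until their votes reach the quorum.
        chosen, total = [], 0
        for rep in random.sample(self.replicas, len(self.replicas)):
            chosen.append(rep)
            total += rep["votes"]
            if total >= needed:
                return chosen
        raise RuntimeError("not enough votes available")

    def read(self):
        quorum = self._quorum(self.r)
        current = max(quorum, key=lambda rep: rep["version"])
        return current["version"], current["value"]

    def write(self, value):
        version, _ = self.read()            # learn the latest version first
        for rep in self._quorum(self.w):
            rep["version"], rep["value"] = version + 1, value

store = VotingReplicas(votes=[1, 1, 1], r=2, w=2)
store.write("hello")
print(store.read())   # (1, 'hello'): any read quorum sees the write
```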
Performance of parallel I/O scheduling strategies on a network of workstations Techniques for scheduling parallel I/O for both uniprogrammed systems that run single jobs in isolation and multiprogrammed environments that execute multiple parallel jobs simultaneously are presented. The performance of the scheduling algorithms is evaluated on a network of workstations. A new scheduling algorithm proposed in this paper is observed to perform very well for systems running single jobs in isolation. The algorithms that use knowledge of job characteristics are observed to produce a superior performance in multiprogrammed parallel environments
Adaptive Prefetching and Storage Reorganization In A Log-Structured Storage System We present a storage management system that has the ability to adapt to the data access characteristics of the application that uses it based on collection and analysis of runtime statistics. This feature is especially useful in the storage management layer of database systems, where applications exhibit relatively predictable access patterns. Adaptive reorganization is performed by the storage management system in a manner that optimizes the access patterns of the system for which it is used. We enhance the log-structured storage system that naturally caters for write optimization, with the addition of a statistics collection mechanism to determine data access patterns of applications. The storage system can serve as a testbed for a variety of statistics analysis and clustering mechanisms. Higher level application-specific data clustering mechanisms can be used to override the storage system's low-level clustering mechanisms. In addition, the analysis techniques and reorganization scheme can be used in other storage systems. Performance results from our prototype show potential response time speedups of up to 83 percent over the basic log-structured file system in the best case, using a combination of storage reorganization and prefetching.
Global Reinforcement Learning in Neural Networks with Stochastic Synapses We have found a more general formulation of the REINFORCE learning principle which had been proposed by R. J. Williams for the case of artificial neural networks with stochastic cells ("Boltzmann machines"). This formulation has enabled us to apply the principle to global reinforcement learning in networks with deterministic neural cells but stochastic synapses, and to suggest two groups of new learning rules for such networks, including simple local rules. Numerical simulations have shown that at least for several popular benchmark problems one of the new learning rules may provide results on a par with the best known global reinforcement techniques.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.207569
0.044607
0.036564
0.016377
0.004112
0.00211
0.000572
0.000008
0
0
0
0
0
0
On Computing Belief Change Operations using Quantified Boolean Formulas In this paper, we show how an approach to belief revision and belief contraction can be axiomatized by means of quantified Boolean formulas. Specifically, we consider the approach of belief change scenarios, a general framework that has been introduced for expressing different forms of belief change. The essential idea is that for a belief change scenario (K, R, C), the set of formulas K, representing the knowledge base, is modified so that the sets of formulas R and C are respectively true in, and consistent with, the result. By restricting the form of a belief change scenario, one obtains specific belief change operators including belief revision, contraction, update, and merging. For both the general approach and for specific operators, we give a quantified Boolean formula such that satisfying truth assignments to the free variables correspond to belief change extensions in the original approach. Hence, we reduce the problem of determining the results of a belief change operation to that of satisfiability. This approach has several benefits. First, it furnishes an axiomatic specification of belief change with respect to belief change scenarios. This then leads to further insight into the belief change framework. Second, this axiomatization allows us to identify strict complexity bounds for the considered reasoning tasks. Third, we have implemented these different forms of belief change by means of existing solvers for quantified Boolean formulas. As well, it appears that this approach may be straightforwardly applied to other specific approaches to belief change.
Computational properties of argument systems satisfying graph-theoretic constraints One difficulty that arises in abstract argument systems is that many natural questions regarding argument acceptability are, in general, computationally intractable, having been classified as complete for classes such as NP, co-NP, and Π2P. In consequence, a number of researchers have considered methods for specialising the structure of such systems so as to identify classes for which efficient decision processes exist. In this paper the effect of a number of graph-theoretic restrictions is considered: k-partite systems (k≥2) in which the set of arguments may be partitioned into k sets each of which is conflict-free; systems in which the numbers of attacks originating from and made upon any argument are bounded; planar systems; and, finally, those of k-bounded treewidth. For the class of bipartite graphs, it is shown that determining the acceptability status of a specific argument can be accomplished in polynomial time under both credulous and sceptical semantics. In addition we establish the existence of polynomial time methods for systems having bounded treewidth when deciding the following: whether a given (set of) arguments is credulously accepted; if the system has a non-empty preferred extension; has a stable extension; is coherent; has at least one sceptically accepted argument. In contrast to these positive results, however, deciding whether an arbitrary set of arguments is "collectively acceptable" remains NP-complete in bipartite systems. Furthermore, for both planar and bounded degree systems the principal decision problems are as hard as the unrestricted cases. In deriving these latter results we introduce various concepts of "simulating" a general argument system by a restricted class, so allowing any argument system to be translated to one which has both bounded degree and is planar. Finally, for the development of basic argument systems to so-called "value-based frameworks", we present results indicating that decision problems known to be intractable in their most general form remain so even under quite severe graph-theoretic restrictions. In particular the problem of deciding "subjective acceptability" continues to be NP-complete even when the underlying graph is a binary tree.
Reasoning in Argumentation Frameworks Using Quantified Boolean Formulas This paper describes a generic approach to implement propositional argumentation frameworks by means of quantified Boolean formulas (QBFs). The motivation to this work is based on the following observations: Firstly, depending on the underlying deductive system and the chosen semantics (i.e., the kind of extension under consideration), reasoning in argumentation frameworks can become computationally involving up to the fourth level of the polynomial hierarchy. This makes the language of QBFs a suitable target formalism since decision problems from the polynomial hierarchy can be efficiently represented in terms of QBFs. Secondly, several practicably efficient solvers for QBFs are currently available, and thus can be used as black-box engines in potential implementations of argumentation frameworks. Finally, the definition of suitable QBF modules provides us with a tool box in order to capture a broad range of reasoning tasks associated to formal argumentation.
On deciding subsumption problems Subsumption is an important redundancy elimination method in automated deduction. A clause D is subsumed by a set 𝒞 of clauses if there is a clause C ∈ 𝒞 and a substitution σ such that the literals of Cσ are included in D. In the field of automated model building, subsumption has been modified into an even stronger redundancy elimination method, the so-called clausal H-subsumption. Atomic H-subsumption emerges from clausal H-subsumption by restricting D to an atom and 𝒞 to a set of atoms. Both clausal and atomic H-subsumption play an indispensable key role in automated model building. Moreover, problems equivalent to atomic H-subsumption have been studied under different terminologies in many areas of computer science. Both clausal and atomic H-subsumption are known to be intractable, i.e., complete for the second level of the polynomial hierarchy and NP-complete, respectively. In this paper, we present a new approach to deciding (clausal and atomic) H-subsumption that is based on a reduction to QSAT_2 and SAT, respectively.
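Ordinary clause subsumption, the notion that H-subsumption strengthens, is itself a backtracking search for a substitution. A minimal Python sketch (atoms only, flat terms; by the convention assumed here, variable names start with an uppercase letter):

def match_term(pattern, term, sub):
    # Extend substitution `sub` so that pattern maps to term, or return None.
    if pattern[0].isupper():                        # pattern is a variable
        if pattern in sub:
            return sub if sub[pattern] == term else None
        new = dict(sub)
        new[pattern] = term
        return new
    return sub if pattern == term else None         # constants must be equal

def subsumes(C, D, sub=None):
    # True iff some substitution maps every literal of C into D.
    sub = sub or {}
    if not C:
        return True
    lit, rest = C[0], C[1:]
    for d in D:
        if lit[0] != d[0] or len(lit) != len(d):    # predicate/arity check
            continue
        s = sub
        for p, t in zip(lit[1:], d[1:]):
            s = match_term(p, t, s)
            if s is None:
                break
        if s is not None and subsumes(rest, D, s):
            return True
    return False

# p(X, b) subsumes the clause p(a, b) v q(a):
print(subsumes([("p", "X", "b")], [("p", "a", "b"), ("q", "a")]))  # True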
Comparing Different Prenexing Strategies for Quantified Boolean Formulas The majority of the currently available solvers for quantified Boolean formulas (QBFs) process input formulas only in prenex conjunctive normal form. However, the natural representation of practically relevant problems in terms of QBFs usually results in formulas which are not in a specific normal form. Hence, in order to evaluate such QBFs with available solvers, suitable normal-form translations are required. In this paper, we report experimental results comparing different prenexing strategies on a class of structured benchmark problems. The problems under consideration encode the evaluation of nested counterfactuals over a propositional knowledge base, and span the entire polynomial hierarchy. The results show that different prenexing strategies influence the evaluation time in different ways across different solvers. In particular, some solvers are robust to the chosen strategies while others are not.
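For illustration (an invented example, not one of the paper's benchmark instances): a non-prenex QBF such as

\[ \Phi = (\exists x\,\forall y\;\varphi_1(x,y)) \wedge (\forall u\,\exists v\;\varphi_2(u,v)) \]

admits several equivalent prenex forms, since any linear quantifier ordering that keeps \(y\) after \(x\) and \(v\) after \(u\) is sound, e.g.

\[ \exists x\,\forall y\,\forall u\,\exists v\;(\varphi_1 \wedge \varphi_2) \qquad\text{or}\qquad \forall u\,\exists x\,\exists v\,\forall y\;(\varphi_1 \wedge \varphi_2). \]

Which of these a translator picks is exactly the kind of strategic choice whose impact on solver runtime the paper measures.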
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
BDD-based decision procedures for the modal logic K We describe BDD-based decision procedures for the modal logic K. Our approach is inspired by the automata-theoretic approach, but we avoid explicit automata construction. Instead, we compute certain fixpoints of a set of types, which can be viewed as an on-the-fly emptiness test of the automaton. We use BDDs to represent and manipulate such type sets, and investigate different kinds of representations as well as a "level-based" representation scheme. The latter turns out to speed up construction and reduce memory consumption considerably. We also study the effect of formula simplification on our decision procedures. To prove the viability of our approach, we compare it with a representative selection of other approaches, including a translation of K to QBF. Our results indicate that the BDD-based approach dominates for modally heavy formulae, while search-based approaches dominate for propositionally heavy formulae.
Complexity in Value-Based Argument Systems We consider a number of decision problems formulated in value-based argumentation frameworks (VAFs), a development of Dung's argument systems in which arguments have associated abstract values which are considered relative to the orderings induced by the opinions of specific audiences. In the context of a single fixed audience, it is known that those decision questions which are typically computationally hard in the standard setting admit efficient solution methods in the value-based setting. In this paper we show that, in spite of this positive property, there still remain a number of natural questions that arise solely in value-based schemes for which there are unlikely to be efficient decision processes.
Extremal problems in logic programming and stable model computation We study the following problem: given a class of logic programs 𝒞, determine the maximum number of stable models of a program from 𝒞. We establish the maximum for the class of all logic programs with at most n clauses, and for the class of all logic programs of size at most n. We also characterize the programs for which the maxima are attained. We obtain similar results for the class of all disjunctive logic programs with at most n clauses, each of length at most m, and for the class of all disjunctive logic programs of size at most n. Our results on logic programs have direct implications for the design of algorithms to compute stable models. Several such algorithms, similar in spirit to the Davis-Putnam procedure, are described in the paper. Our results imply that there is an algorithm that finds all stable models of a program with n clauses after considering a search space of size O(3^{n/3}) in the worst case. Our results also provide some insights into the question of representability of families of sets as families of stable models of logic programs.
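The 3^{n/3} bound is witnessed by groups of three mutually exclusive clauses. A brute-force Python check via the Gelfond-Lifschitz reduct (over an invented three-atom program) shows that 3 clauses already yield 3 stable models, so n/3 independent copies yield 3^{n/3}:

from itertools import combinations

atoms = {"p", "q", "r"}
# Rules as (head, positive_body, negative_body), encoding the program
#   p :- not q, not r.   q :- not p, not r.   r :- not p, not q.
rules = [("p", set(), {"q", "r"}),
         ("q", set(), {"p", "r"}),
         ("r", set(), {"p", "q"})]

def least_model(positive_rules):
    m, changed = set(), True
    while changed:
        changed = False
        for head, body, _ in positive_rules:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(M):
    # Drop rules blocked by M, then compare the reduct's least model with M.
    reduct = [r for r in rules if not (r[2] & M)]
    return least_model(reduct) == M

cands = [set(c) for n in range(len(atoms) + 1)
         for c in combinations(sorted(atoms), n)]
print([M for M in cands if is_stable(M)])   # three stable models: {p}, {q}, {r}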
On linear characterizations of combinatorial optimization problems We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP = co-NP -- a very unlikely event. We also apply the ellipsoid method for linear programming to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.
Disk scheduling in a multimedia I/O system This article provides a retrospective of our original paper by the same title in the Proceedings of the First ACM Conference on Multimedia, published in 1993. This article examines the problem of disk scheduling in a multimedia I/O system. In a multimedia server, the disk requests may have constant data rate requirements and need guaranteed service. We propose a new scheduling algorithm, SCAN-EDF, that combines the features of SCAN type of seek optimizing algorithm with an Earliest Deadline First (EDF) type of real-time scheduling algorithm. We compare SCAN-EDF with other scheduling strategies and show that SCAN-EDF combines the best features of both SCAN and EDF. We also investigate the impact of buffer space on the maximum number of video streams that can be supported. We show that by making the deadlines larger than the request periods, a larger number of streams can be supported. We also describe how we extended the SCAN-EDF algorithm in the PRISM multimedia architecture. PRISM is an integrated multimedia server, designed to satisfy the QOS requirements of multiple classes of requests. Our experience in implementing the extended SCAN-EDF algorithm in a generic operating system is discussed and performance metrics and results are presented to illustrate how the SCAN-EDF extensions and implementation strategies have succeeded in meeting the QOS requirements of different classes of requests.
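A minimal Python sketch of the SCAN-EDF idea (not the PRISM implementation): requests are served in deadline order, and requests sharing a deadline are served in one elevator sweep from the current head position. The request tuples below are invented.

def scan_edf(requests, head):
    # requests: list of (deadline, track); returns the service order.
    order, pending = [], sorted(requests)
    while pending:
        d = pending[0][0]                                  # earliest deadline
        batch = [r for r in pending if r[0] == d]
        up = sorted(r for r in batch if r[1] >= head)      # sweep upward...
        down = sorted((r for r in batch if r[1] < head), reverse=True)
        for r in up + down:                                # ...then downward
            order.append(r)
            head = r[1]
        pending = pending[len(batch):]
    return order

print(scan_edf([(10, 50), (10, 120), (10, 30), (5, 75)], head=60))
# [(5, 75), (10, 120), (10, 50), (10, 30)]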
I/O reference behavior of production database workloads and the TPC benchmarks—an analysis at the logical level As improvements in processor performance continue to far outpace improvements in storage performance, I/O is increasingly the bottleneck in computer systems, especially in large database systems that manage huge amounts of data. The key to achieving good I/O performance is to thoroughly understand its characteristics. In this article we present a comprehensive analysis of the logical I/O reference behavior of the peak production database workloads from ten of the world's largest corporations. In particular, we focus on how these workloads respond to different techniques for caching, prefetching, and write buffering. Our findings include several broadly applicable rules of thumb that describe how effective the various I/O optimization techniques are for the production workloads. For instance, our results indicate that the buffer pool miss ratio tends to be related to the ratio of buffer pool size to data size by an inverse square root rule. A similar fourth root rule relates the write miss ratio and the ratio of buffer pool size to data size. In addition, we characterize the reference characteristics of workloads similar to the Transaction Processing Performance Council (TPC) benchmarks C (TPC-C) and D (TPC-D), which are de facto standard performance measures for online transaction processing (OLTP) systems and decision support systems (DSS), respectively. Since benchmarks such as TPC-C and TPC-D can only be used effectively if their strengths and limitations are understood, a major focus of our analysis is to identify aspects of the benchmarks that stress the system differently than the production workloads. We discover that for the most part, the reference behavior of TPC-C and TPC-D falls within the range of behavior exhibited by the production workloads. However, there are some noteworthy exceptions that affect well-known I/O optimization techniques such as caching (LRU is further from the optimal for TPC-C, while there is little sharing of pages between transactions for TPC-D), prefetching (TPC-C exhibits no significant sequentiality), and write buffering (write buffering is less effective for the TPC benchmarks). While the two TPC benchmarks generally complement one another in reflecting the characteristics of the production workloads, there remain aspects of the real workloads that are not represented by either of the benchmarks.
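The inverse square root rule of thumb is easy to apply; a tiny Python example (the 20% anchor miss ratio below is invented purely for illustration):

def predicted_miss(buffer_frac, anchor_frac=0.01, anchor_miss=0.20):
    # miss ratio ~ anchor_miss * sqrt(anchor_frac / buffer_frac)
    return anchor_miss * (anchor_frac / buffer_frac) ** 0.5

print(predicted_miss(0.01), predicted_miss(0.04))
# 0.2 and 0.1: quadrupling the buffer roughly halves the miss ratio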
Case-Based Support for the Design of Dynamic System Requirements Using formal specifications based on varieties of mathematical logic is becoming common in the process of designing and implementing software. Formal methods are usually intended to include all important details of the final system in the specification with the aim of proving that it possesses certain mathematical properties. In large, complex systems, this task requires sophisticated theorem proving, which can be difficult and complicated. Telecommunication systems are large and complex, making detailed formal specification impractical with current technology. However roughly formal "sketches" of the behaviours these services provide can be produced, and these can be very helpful in locating which service might be relevant to a given problem. Our case-based approach uses coarse-grained requirements specification sketches to outline the basic behaviour of the system's functional modules (called services), thereby allowing us to identify, reuse and adapt requirements (from cases stored in a library) to construct new cases. By using cases that have already been tested, integrated and implemented, less effort is needed to produce requirements specifications on a large scale. Using a hypothetical telecommunication system as our example, we shall show how comparatively simple logic can be used to capture coarse-grained behaviour and how a case-based approach benefits from this. The input from the examples is used both to identify the cases whose behaviour corresponds most closely to the designer's intentions and to adapt and finally verify the proposed solution against the examples.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.113583
0.06
0.06
0.046151
0.013422
0.004097
0.001569
0.000333
0.000014
0
0
0
0
0
A Prospect-Guided global query expansion strategy using word embeddings. • Global query semantics modeled from the standpoint of prospect vocabulary terms. • Selective semantic exploration strategy adds new terms related to more relevant topics. • Disambiguation issues addressed without exogenous resources. • Significant results improving both recall and precision metrics without relevance feedback.
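A minimal centroid-based expansion sketch in Python (toy three-dimensional embeddings, not the paper's Prospect-Guided selection; all vectors are invented):

import numpy as np

# Toy embedding table; a real system would load pretrained vectors.
emb = {"car":    np.array([0.90, 0.10, 0.00]),
       "auto":   np.array([0.85, 0.15, 0.05]),
       "engine": np.array([0.60, 0.40, 0.10]),
       "banana": np.array([0.00, 0.10, 0.90])}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand(query_terms, k=2):
    # Add the k vocabulary terms closest to the query centroid.
    centroid = np.mean([emb[t] for t in query_terms], axis=0)
    scored = sorted(((cos(emb[w], centroid), w) for w in emb
                     if w not in query_terms), reverse=True)
    return query_terms + [w for _, w in scored[:k]]

print(expand(["car"]))   # ['car', 'auto', 'engine']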
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
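The baseline that such provers improve on is plain recursive splitting: an existential variable becomes a disjunction over both truth values, a universal variable a conjunction. A minimal Python evaluator (no pruning; the paper's techniques target exactly the universal branches this version explores in full):

def eval_qbf(prefix, matrix, assign=None):
    # prefix: list of ('e'|'a', var); matrix: predicate over a dict of values.
    assign = assign or {}
    if not prefix:
        return matrix(assign)
    q, x = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assign, x: b})
                for b in (False, True))
    return any(branches) if q == "e" else all(branches)

# forall x exists y. (x <-> y)  is true: pick y = x.
print(eval_qbf([("a", "x"), ("e", "y")], lambda v: v["x"] == v["y"]))  # True
# exists y forall x. (x <-> y)  is false.
print(eval_qbf([("e", "y"), ("a", "x")], lambda v: v["x"] == v["y"]))  # False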
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
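A compact numpy sketch of the method (polynomial kernel, centering in feature space, eigendecomposition of the kernel matrix); the data and kernel degree below are arbitrary:

import numpy as np

def kernel_pca(X, k=2, degree=2):
    # Kernel PCA with a polynomial kernel (illustrative sketch).
    n = len(X)
    K = (X @ X.T + 1.0) ** degree              # kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                             # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:k]           # top-k eigenpairs
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                         # projections of training data

X = np.random.RandomState(0).randn(20, 3)
print(kernel_pca(X).shape)                     # (20, 2)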
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
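The core numerical step in one-line form: after linearization, SLAM becomes least squares, minimize ||Ax - b||^2 over the trajectory and map, and the square-root approach factors the measurement Jacobian A. A numpy toy (dense and random; real implementations exploit sparsity and column ordering):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))     # toy Jacobian: 8 measurements, 4 states
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A)              # R is the square-root information matrix
x = np.linalg.solve(R, Q.T @ b)     # back-substitution on R x = Q^T b

print(np.allclose(A.T @ A @ x, A.T @ b))   # normal equations hold: True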
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, together with systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Bayesian model and dimension reduction for uncertainty propagation: applications in random media Well-established methods for the solution of stochastic partial differential equations (SPDEs) typically struggle in problems with high-dimensional inputs/outputs. Such difficulties are only amplified in large-scale applications where even a few tens of full-order model runs are impracticable. While dimensionality reduction can alleviate some of these issues, it is not known which and how many features of the (high-dimensional) input are actually predictive of the (high-dimensional) output. In this paper, we advocate a Bayesian formulation that is capable of performing simultaneous dimension and model-order reduction. It consists of a component that encodes the high-dimensional input into a low-dimensional set of feature functions by employing sparsity-inducing priors and a decoding component that makes use of the solution of a coarse-grained model in order to reconstruct that of the full-order model. Both components are represented with latent variables in a probabilistic graphical model and are simultaneously trained using Stochastic Variational Inference methods. The model is capable of quantifying the predictive uncertainty due to the information loss that unavoidably takes place in any model-order/dimension reduction as well as the uncertainty arising from finite-sized training datasets. We demonstrate its capabilities in the context of random media where fine-scale fluctuations can give rise to random inputs with tens of thousands of variables. With a few tens of full-order model simulations, the proposed model is capable of identifying salient physical features and producing sharp predictions under different boundary conditions of the full output which itself consists of thousands of components.
Deep Learning in Bioinformatics. In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.
A fast learning algorithm for deep belief nets. We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
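The workhorse inside the greedy layer-wise procedure is the contrastive-divergence (CD-1) weight update for a single RBM. A numpy sketch with biases, momentum, and weight decay omitted (all sizes below are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    # One CD-1 update: positive phase on data, negative phase on reconstruction.
    ph0 = sigmoid(v0 @ W)                        # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0     # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                      # reconstruction of visibles
    ph1 = sigmoid(pv1 @ W)
    return W + lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)

W = 0.01 * rng.standard_normal((6, 4))           # 6 visible, 4 hidden units
batch = (rng.random((10, 6)) < 0.5) * 1.0        # toy binary minibatch
W = cd1_step(W, batch)
print(W.shape)                                   # (6, 4)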
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic Programming and Negation: A Survey. We survey here various approaches which have been proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them.
A sufficient condition for backtrack-bounded search Backtrack search is often used to solve constraint satisfaction problems. A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree. This analysis leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems. The solutions involve a combination of relaxation preprocessing and backtrack search. The bounds are expressed in terms of the structure of the constraint connections. Specifically, the effort is shown to have a bound exponential in the size of the largest biconnected component of the constraint graph, as opposed to the size of the graph as a whole.
Convergence of a Nonconforming Multiscale Finite Element Method The multiscale finite element method (MsFEM) [T. Y. Hou, X. H. Wu, and Z. Cai, Math. Comp., 1998, to appear; T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189] has been introduced to capture the large scale solutions of elliptic equations with highly oscillatory coefficients. This is accomplished by constructing the multiscale base functions from the local solutions of the elliptic operator. Our previous study reveals that the leading order error in this approach is caused by the "resonant sampling," which leads to large error when the mesh size is close to the small scale of the continuous problem. Similar difficulty also arises in numerical upscaling methods. An oversampling technique has been introduced to alleviate this difficulty [T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189]. A consequence of the oversampling method is that the resulting finite element method is no longer conforming. Here we give a detailed analysis of the nonconforming error. Our analysis also reveals a new cell resonance error which is caused by the mismatch between the mesh size and the wavelength of the small scale. We show that the cell resonance error is of lower order. Our numerical experiments demonstrate that the cell resonance error is generically small and is difficult to observe in practice.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
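The L1-regularized least squares subproblem can also be solved by generic iterative soft-thresholding (ISTA); the paper's feature-sign search is a faster specialized alternative, so the numpy sketch below (random toy data) only illustrates the problem being solved:

import numpy as np

def ista(A, y, lam=0.1, steps=500):
    # Solve  min_x 0.5*||Ax - y||^2 + lam*||x||_1  by soft-thresholding.
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true
print(np.nonzero(np.round(ista(A, y), 1))[0])  # indices of large coefficients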
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to validate our approach, and the results showed that decision support performance was significantly improved. We also propose a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discuss the advantages, the limitations and the future work of our study. Both practical and theoretical contributions are also discussed in the paper.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.2
0.028571
0.000259
0
0
0
0
0
0
0
0
0
0
0
Compiler-directed proactive power management for networks Increasing use of parallel computation platforms (both off-chip and on-chip) makes communication analysis and optimization an important target. While there have been numerous studies that target network performance of parallel architectures, the efforts that target network power consumption (in terms of both modeling and optimization) are relatively new. One of the common characteristics of most of the prior approaches to network power management is that they are hardware-based and reactive in the sense that they manage power consumption of the network as a response to observed message traffic. Consequently, they can miss important opportunities for saving power and can incur performance penalties due to inaccuracies in predicting future idle and active times of communication links. Motivated by this observation, this paper proposes a compiler-directed proactive approach to network power management for the class of loop-intensive applications running on small-sized networks used exclusively by a single embedded application at a time. As compared to hardware-based approaches, the proposed compiler-directed approach has two potential benefits. First, based on high-level communication analysis, it determines the points at which a given communication link is idle and can be turned off (i.e., powered down) to save power. Therefore, an idle link can be put in the low-power state without waiting for a certain period of time to make sure that the link has really become idle (as in the case of hardware schemes). Second, since the compiler can also determine the point at which a turned-off link will be needed in the future, it can pre-activate it (i.e., before it is actually needed) to eliminate the turn on (reactivation) performance penalty. Our simulations with seven array-intensive applications and an embedded on-chip network clearly show that the proposed compiler-directed approach is better than a hardware-based scheme from both power and performance perspectives.
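The intended advantage over reactive schemes can be made concrete with a toy slot model in Python (the idle trace and threshold below are invented, and pre-activation latency is not modeled): the reactive scheme loses the detection window before sleeping, while a compiler that knows the communication pattern sleeps for the whole idle interval.

def reactive(trace, threshold=2):
    # Sleep only after `threshold` consecutive idle slots are observed.
    saved = run = 0
    for idle in trace:
        run = run + 1 if idle else 0
        if run > threshold:            # slots actually spent asleep
            saved += 1
    return saved

def proactive(trace):
    # Compiler knows the idle window in advance: sleep for all of it.
    return sum(trace)

trace = [0, 0, 1, 1, 1, 1, 1, 0, 0]    # 1 = link idle in that slot
print(reactive(trace), proactive(trace))   # 3 vs 5 slots powered down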
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, together with systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Complexity of Planning with Partial Observability We show that for conditional planning with partial observability the existence problem of plans with success probability 1 is 2-EXP-complete. This result completes the complexity picture for non-probabilistic propositional planning. We also give new, more direct and informative proofs for the EXP-hardness of conditional planning with full observability and the EXPSPACE-hardness of conditional planning without observability. The proofs demonstrate how lack of full observability allows the encoding of exponential-space Turing machines in the planning problem, and how the necessity to have branching in plans corresponds to the move to a complexity class defined in terms of alternation from the corresponding deterministic complexity class. Lack of full observability necessitates the use of belief states, the number of which is exponential in the number of states, and alternation corresponds to the choices a branching plan can make.
Mapping conformant planning into SAT through compilation and projection Conformant planning is a variation of classical AI planning where the initial state is partially known and actions can have non-deterministic effects. While a classical plan must achieve the goal from a given initial state using deterministic actions, a conformant plan must achieve the goal in the presence of uncertainty in the initial state and action effects. Conformant planning is computationally harder than classical planning, and unlike classical planning, cannot be reduced polynomially to SAT (unless P = NP). Current SAT approaches to conformant planning, such as those considered by Giunchiglia and colleagues, thus follow a generate-and-test strategy: the models of the theory are generated one by one using a SAT solver (assuming a given planning horizon), and from each such model, a candidate conformant plan is extracted and tested for validity using another SAT call. This works well when the theory has few candidate plans and models, but otherwise is too inefficient. In this paper we propose a different use of a SAT engine where conformant plans are computed by means of a single SAT call over a transformed theory. This transformed theory is obtained by projecting the original theory over the action variables. This operation, while intractable, can be done efficiently provided that the original theory is compiled into d–DNNF (Darwiche 2001), a form akin to OBDDs (Bryant 1992). The experiments that are reported show that the resulting compile-project-sat planner is competitive with state-of-the-art optimal conformant planners and improves upon a planner recently reported at ICAPS-05.
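The validity condition being encoded is simply universal quantification over the possible initial states. A brute-force Python check of conformance on a tiny invented two-switch domain, i.e. the plan-by-plan test that the compile-project-SAT approach avoids:

from itertools import product

states = list(product([0, 1], repeat=2))   # all possible initial states

def apply(action, s):
    # "setN" actions deterministically turn switch N on.
    i = int(action[-1])
    return s[:i] + (1,) + s[i + 1:]

def conformant(plan, goal=(1, 1)):
    # A plan is conformant iff it reaches the goal from EVERY initial state.
    for s in states:
        for a in plan:
            s = apply(a, s)
        if s != goal:
            return False
    return True

print(conformant(["set0"]))            # False: switch 1 may start off
print(conformant(["set0", "set1"]))    # True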
Maintainability: A Weaker Stabilizability-Like Notion for High Level Control The goal of most agents is not just to reach a goal state, but rather also (or alternatively) to put restrictions on their trajectory, in terms of states they must avoid and goals that they must 'maintain'. This is analogous to the notions of 'safety' and 'stability' in the discrete event systems and temporal logic community. In this paper we argue that the notion of 'stability' is too strong for formulating 'maintenance' goals of an agent, in particular reactive and software agents, and give examples of such agents. We present a weaker notion of 'maintainability' and show that our agents which do not satisfy the stability criteria do satisfy the weaker criteria. We give algorithms to test maintainability, and also to generate control for maintainability. We then develop the notion of 'supportability', which generalizes both 'maintainability' and 'stabilizability', develop an automata theory that distinguishes between exogenous and control actions, and develop a temporal logic based on it.
Improving Heuristics for Planning as Search in Belief Space Search in the space of beliefs has been proposed as a convenient framework for tackling planning under uncertainty. Significant improvements have been recently achieved, especially thanks to the use of symbolic model checking techniques such as Binary Decision Diagrams. However, the problem is extremely complex, and the heuristics available so far are unable to provide enough guidance for an informed search. In this paper we tackle the problem of defining effective heuristics for driving the search in belief space. The basic intuition is that the "degree of knowledge" associated with the belief states reached by partial plans must be explicitly taken into account when deciding the search direction. We propose a way of ranking belief states depending on their degree of knowledge with respect to a given set of boolean functions. This allows us to define a planning algorithm based on the identification and solution of suitable "knowledge subgoals", that are used as intermediate steps during the search. The solution of knowledge subgoals is based on the identification of "knowledge acquisition conditions", i.e. subsets of the state space from where it is possible to perform knowledge acquisition actions. We show the effectiveness of the proposed ideas by observing substantial improvements in the conformant planning algorithms of MBP.
Generating plans in concurrent, probabilistic, over-subscribed domains Planning in realistic domains involves reasoning under uncertainty, operating under time and resource constraints, and finding the optimal set of goals to be achieved. In this paper, we provide an AO* based algorithm that can deal with durative actions, concurrent execution, over-subscribed goals, and probabilistic outcomes in a unified way. We explore plan optimization by introducing two novel aspects to the model. First, we introduce parallel steps that serve the same goal and increase the probability of success in addition to parallel steps that serve different goals and decrease execution time. Second, we introduce plan steps to terminate concurrent steps that are no longer useful so that resources can be conserved. Our algorithm called CPOAO* (Concurrent, Probabilistic, Oversubscription AO*) can deal with the aforementioned extensions and relies on the AO* framework to reduce the size of the search space using informative heuristic functions. We describe our framework, implementation, the heuristic functions we use, the experimental results, and potential research on heuristics that can further reduce the size of search space.
Fair LTL synthesis for non-deterministic systems using strong cyclic planners We consider the problem of planning in environments where the state is fully observable, actions have non-deterministic effects, and plans must generate infinite state trajectories for achieving a large class of LTL goals. More formally, we focus on the control synthesis problem under the assumption that the LTL formula to be realized can be mapped into a deterministic Büchi automaton. We show that by assuming that action nondeterminism is fair, namely that infinite executions of a nondeterministic action in the same state yield each possible successor state an infinite number of times, the (fair) synthesis problem can be reduced to a standard strong cyclic planning task over reachability goals. Since strong cyclic planners are built on top of efficient classical planners, the transformation reduces the non-deterministic, fully observable, temporally extended planning task into the solution of classical planning problems. A number of experiments are reported showing the potential benefits of this approach to synthesis in comparison with state-of-the-art symbolic methods.
Beyond NP: Arc-Consistency for Quantified Constraints The generalization of the satisfiability problem with arbitrary quantifiers is a challenging problem of both theoretical and practical relevance. Being PSPACE-complete, it provides a canonical model for solving other PSPACE tasks which naturally arise in AI. Effective SAT-based solvers have been designed very recently for the special case of boolean constraints. We propose to consider the more general problem where constraints are arbitrary relations over finite domains. Adopting the viewpoint of constraint-propagation techniques so successful for CSPs, we provide a theoretical study of this problem. Our main result is to propose quantified arc-consistency as a natural extension of the classical CSP notion.
Pruning Conformant Plans by Counting Models on Compiled d-DNNF Representations Optimal planners in the classical setting are built around two notions: branching and pruning. SAT-based planners for example branch by trying the values of a selected variable, and prune by propagating constraints and checking consistency. In the conformant setting, a similar branching scheme can be used if restricted to action variables, but the pruning scheme must be modified. Indeed, pruning branches that encode inconsistent partial plans is not sufficient since a partial plan may be consistent and complete (covering all the action variables) and still fail to be a conformant plan. This happens indeed when the plan does not conform to some possible initial state or transition. A remedy to this problem is to use a criterion stronger than consistency for pruning. This is actually what we do in this paper where the consistency-based pruning criterion used in classical planning is replaced by a validity-based criterion suitable for conformant planning. Under the assumption that actions are deterministic, a partial plan can be defined as valid when it is logically consistent with the theory and each possible initial state. A valid partial plan that is complete is guaranteed to encode a conformant plan, and vice versa. Checking validity, however, while useful for pruning can be very expensive. We show then that such validity checks can be performed in linear time provided that the theory encoding the problem is transformed into a logically equivalent theory in deterministic decomposable negation normal form (d-DNNF). In d-DNNF, plan validity checks can be reduced to two linear-time operations: projection (finding the strongest consequence of a formula over some of its variables) and model counting (finding the number of satisfying assignments). We then define and evaluate a conformant planner that branches on action variables, and prunes invalid partial plans in linear time. The empirical results are encouraging, showing the potential benefits of stronger forms of inference in planning tasks that are not reducible to SAT.
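One of the two linear-time d-DNNF operations named above, model counting, is simple enough to sketch. The fragment below is a minimal, illustrative counter over a d-DNNF circuit under a hypothetical tuple encoding of nodes; it is not the paper's planner, only the property it exploits: decomposability lets AND nodes multiply child counts, and determinism lets OR nodes add them.

```python
# Hypothetical node encoding for a d-DNNF circuit:
#   ('lit', v) or ('lit', -v)       literal over variable |v|
#   ('and', child1, child2, ...)    decomposable: children share no variables
#   ('or',  child1, child2, ...)    deterministic: children mutually exclusive
# A real implementation memoizes per node, since circuits are DAGs.

def variables(node):
    """Set of variables mentioned in the subcircuit rooted at node."""
    if node[0] == 'lit':
        return {abs(node[1])}
    out = set()
    for child in node[1:]:
        out |= variables(child)
    return out

def count(node):
    """Number of models of node over exactly variables(node)."""
    if node[0] == 'lit':
        return 1
    if node[0] == 'and':
        c = 1
        for child in node[1:]:
            c *= count(child)       # disjoint variable sets: counts multiply
        return c
    # 'or': determinism means no model is counted twice; pad each child's
    # count up to the variable set of the whole disjunction
    nv = len(variables(node))
    return sum(count(ch) * 2 ** (nv - len(variables(ch))) for ch in node[1:])

def model_count(node, n_vars):
    """Models over all n_vars problem variables (free variables double it)."""
    return count(node) * 2 ** (n_vars - len(variables(node)))

# (x1 and x2) or (not x1): three models over {x1, x2}
circuit = ('or', ('and', ('lit', 1), ('lit', 2)), ('lit', -1))
print(model_count(circuit, 2))   # 3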
Compilation of a High-level Temporal Planning Language into PDDL 2.1 An important aspect of any automatic planner is the language in which the user expresses problem instances. A rich language is an advantage for the user, whereas a simple language is an advantage for the programmer who must write a program to solve all planning problems expressible in the language. Considering the temporal planning language PDDL 2.1 as a low-level language, we show how to automatically compile a much richer language into PDDL 2.1. The worst-case complexity of this transformation is quadratic. Our high-level language allows the user to declare time-points and impose simple temporal constraints between them. Conditions and effects can be imposed at time-points, over intervals and over sliding intervals within fixed intervals. Non-instantaneous transitions can also be modelled.
Intention reconsideration in complex environments One of the key problems in the design of belief-desire-intention (BDI) agents is that of finding an appropriate policy for intention reconsideration. In previous work, Kinny and Georgeff investigated the effectiveness of several such reconsideration policies, and demonstrated that in general, there is no one best approach: different environments demand different intention reconsideration strategies. In this paper, we further investigate the relationship between the effectiveness of an agent and its...
Recognizing when greed can approximate maximum independent sets is complete for parallel access to NP Bodlaender, Thilikos, and Yamazaki (1997) investigate the computational complexity of the problem of whether the Minimum Degree Greedy Algorithm can approximate a maximum independent set of a graph within a constant factor of r, for fixed rational r >= 1. They denote this problem by S_r and prove that for each rational r >= 1, S_r is coNP-hard. They also provide a P^NP upper bound on S_r, leaving open the question of whether this gap between the upper and the lower bound of S_r can be closed. For the special case of r = 1, they show that S_1 is even DP-hard, again leaving open the question of whether S_1 can be shown to be complete for DP or some larger class such as P^NP. In this note, we completely solve all the questions left open by Bodlaender et al. Our main result is that for each rational r >= 1, S_r is complete for P^NP_||, the class of sets solvable via parallel access to NP.
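The greedy heuristic whose approximation behavior is being classified is itself tiny: repeatedly take a vertex of minimum degree, then delete it and its neighbours. A minimal sketch follows, with the dict-of-neighbour-sets graph encoding being an assumption made here for illustration.

```python
def min_degree_greedy(adj):
    """Minimum Degree Greedy independent set; adj: vertex -> set of neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # a minimum-degree vertex
        independent.append(v)
        removed = adj[v] | {v}                    # drop v and its neighbours
        for u in removed:
            adj.pop(u, None)
        for ns in adj.values():
            ns -= removed
    return independent

# Example: on a 4-cycle the greedy set has size 2, which is maximum here.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(min_degree_greedy(cycle))   # e.g. [0, 2]
```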
PatternHunter II: highly sensitive and fast homology search. Extending the single optimized spaced seed of PatternHunter to multiple ones, PatternHunter II simultaneously remedies the lack of sensitivity of Blastn and the lack of speed of Smith-Waterman, for homology search. At Blastn speed, PatternHunter II approaches Smith-Waterman sensitivity, bringing homology search technology back to a full circle.
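Although PatternHunter's optimized multiple seeds are the product of a hard design problem, the basic spaced-seed matching idea is easy to make concrete. The sketch below uses an illustrative seed, not PatternHunter's actual seed set, and a naive quadratic scan where real tools hash the characters at the seed's match positions into an index.

```python
def seed_hits(query, subject, seed='110101'):
    """Report (i, j) pairs where query and subject agree at the seed's
    '1' positions; '0' positions are wildcards, which decorrelates hits
    compared with a contiguous BLAST-style seed of the same weight."""
    care = [i for i, c in enumerate(seed) if c == '1']
    span = len(seed)
    hits = []
    for i in range(len(query) - span + 1):
        for j in range(len(subject) - span + 1):
            if all(query[i + k] == subject[j + k] for k in care):
                hits.append((i, j))   # candidate for alignment extension
    return hits

print(seed_hits('ACGTACGT', 'ACCTACGA'))
```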
Hypothesizing about signaling networks The current knowledge about signaling networks is largely incomplete. Thus biologists constantly need to revise or extend existing knowledge. The revision and/or extension is first formulated as theoretical hypotheses, then verified experimentally. Many computer-aided systems have been developed to assist biologists in undertaking this challenge. The majority of the systems help in finding “patterns” in data and leave the reasoning to biologists. A few systems have tried to automate the reasoning process of hypothesis formation. These systems generate hypotheses from a knowledge base and given observations. A main drawback of these knowledge-based systems is the knowledge representation formalism they use. These formalisms are mostly monotonic and are now known to be not quite suitable for knowledge representation, especially in dealing with the inherently incomplete knowledge about signaling networks. We propose an action language based framework for hypothesis formation for signaling networks. We show that the hypothesis formation problem can be translated into an abduction problem. This translation facilitates the complexity analysis and an efficient implementation of our system. We illustrate the applicability of our system with an example of hypothesis formation in the signaling network of the p53 protein.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms with FPGAs provides better performance than the other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes the current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.00936
0.014077
0.010526
0.008263
0.006568
0.005263
0.003009
0.001448
0.000212
0.000031
0
0
0
0
A Multi Language Environment to Develop Multi Agent Applications HOMAGE is an environment for the development of multi agent systems integrating agent and object-oriented programming paradigms and offering two different programming levels: object and agent. The object level allows the use of three object-oriented programming languages (C++, Common Lisp and Java) to develop new agent models as well as the components that will be used to build the body of agents. The agent level allows the development of new agents by defining their brain and by composing the components defined at the object level, and allows the development of multi agent systems by distributing and interconnecting agent instances. Moreover, these multi agent systems can be distributed over a network of heterogeneous machines connected through the internet, taking advantage of a set of communication and distribution libraries that allow agents to communicate through different protocols. The paper includes a brief description of a robotic application that we implemented while experimenting with the environment.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
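The semantics admits a very small executable check. The sketch below tests a candidate set of atoms against a ground normal program via the Gelfond-Lifschitz reduct; the tuple encoding of rules is an assumption made here for illustration.

```python
# A ground rule  head :- pos_body, not neg_body  is encoded (hypothetically)
# as the tuple (head, pos_body, neg_body), with bodies as sets of atoms.

def least_model(positive_rules):
    """Least model of a negation-free program, by naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(rules, candidate):
    """candidate is a stable model iff it is the least model of its reduct."""
    # Reduct: drop rules whose negative body intersects the candidate,
    # then erase the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

# p :- not q.   q :- not p.   has exactly two stable models, {p} and {q}.
rules = [('p', set(), {'q'}), ('q', set(), {'p'})]
print(is_stable(rules, {'p'}), is_stable(rules, {'q'}),
      is_stable(rules, {'p', 'q'}))   # True True False
```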
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
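The straightforward algorithm the paper starts from can be sketched compactly: recurse over the quantifier prefix, branching on each variable, with universal variables requiring both branches to succeed and existential variables requiring one. The encoding below (a prefix list plus a CNF matrix as sets of signed integers) is assumed for illustration, and none of the paper's pruning techniques are included; closed formulas are assumed.

```python
def simplify(cnf, lit):
    """Assign lit true: drop satisfied clauses, shrink the rest."""
    out = []
    for clause in cnf:
        if lit in clause:
            continue                      # clause satisfied
        reduced = clause - {-lit}
        if not reduced:
            return None                   # empty clause: branch fails
        out.append(reduced)
    return out

def eval_qbf(prefix, cnf):
    """Naive Davis-Putnam-style evaluation of a closed QBF."""
    if cnf is None:
        return False
    if not prefix:
        return not cnf                    # all clauses satisfied
    (quant, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, simplify(cnf, v)),
                eval_qbf(rest, simplify(cnf, -v)))
    return all(branches) if quant == 'forall' else any(branches)

# forall x exists y: (x or y) and (not x or not y)  -- true, take y = not x
print(eval_qbf([('forall', 1), ('exists', 2)],
               [{1, 2}, {-1, -2}]))       # True
```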
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
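A minimal numpy sketch of the procedure follows: build the kernel matrix, double-centre it (which centres the data in feature space), and take the leading eigenvectors as the nonlinear components. The polynomial kernel degree and the normalization details are illustrative choices, not a reproduction of the paper's experiments.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    """Project the n training points onto the leading kernel principal
    components, using an inhomogeneous polynomial kernel."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree              # kernel matrix
    J = np.eye(n) - np.full((n, n), 1.0 / n)
    Kc = J @ K @ J                             # double centring in feature space
    eigval, eigvec = np.linalg.eigh(Kc)        # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    # scale eigenvectors so projections have unit-norm feature-space axes
    alphas = eigvec[:, :n_components] / np.sqrt(
        np.maximum(eigval[:n_components], 1e-12))
    return Kc @ alphas                         # projections of training points

X = np.random.default_rng(0).normal(size=(100, 4))
print(kernel_pca(X).shape)                     # (100, 2)
```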
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
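At each linearization, the square-root approach reduces to solving a sparse least-squares problem through a matrix factorization. A dense toy version is sketched below with a random stand-in for the measurement Jacobian; real systems factorize sparsely and reorder columns (e.g. with COLAMD) to keep the square-root factor R sparse.

```python
import numpy as np

def solve_least_squares_qr(A, b):
    """Solve min ||A x - b||^2 via QR: A = Q R, where R plays the role of
    the square-root information matrix; then back-substitute R x = Q^T b."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))       # 50 measurement rows, 10 state variables
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.linalg.norm(solve_least_squares_qr(A, b) - x_true))   # small residual
```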
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
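The parity arithmetic behind such arrays is plain XOR, which makes the layout easy to sketch. The fragment below builds the n row parities, n column parities, and the n extra mirrored parities for an n x n data array, and rebuilds one lost block; mirroring the row parities rather than the column parities is an arbitrary illustrative choice, not necessarily the paper's.

```python
import numpy as np

def build_array(data):
    """Two-dimensional parity for an n x n array of data blocks (ints):
    n row parities, n column parities, plus n mirrors of the row parities."""
    row_parity = np.bitwise_xor.reduce(data, axis=1)
    col_parity = np.bitwise_xor.reduce(data, axis=0)
    mirror = row_parity.copy()            # the n additional parity elements
    return row_parity, col_parity, mirror

def recover_block(data, row_parity, i, j):
    """Rebuild a single lost data block from its row parity and row peers."""
    peers = np.bitwise_xor.reduce(np.delete(data[i], j))
    return row_parity[i] ^ peers

n = 4
data = np.random.default_rng(2).integers(0, 256, size=(n, n))
rp, cp, mirror = build_array(data)
assert recover_block(data, rp, 2, 3) == data[2, 3]
```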
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Planning for temporally extended goals In planning, goals have traditionally been viewed as specifying a set of desirable final states. Any plan that transforms the current state to one of these desirable states is viewed to be correct. Goals of this form are limited in what they can specify, and they also do not allow us to constrain the manner in which the plan achieves its objectives. We propose viewing goals as specifying desirable sequences of states, and a plan to be correct if its execution yields one of these desirable sequences. We present a logical language, a temporal logic, for specifying goals with this semantics. Our language is rich and allows the representation of a range of temporally extended goals, including classical goals, goals with temporal deadlines, quantified goals (with both universal and existential quantification), safety goals, and maintenance goals. Our formalism is simple and yet extends previous approaches in this area. We also present a planning algorithm that can generate correct plans for these goals. This algorithm has been implemented, and we provide some examples of the formalism at work. The end result is a planning system which can generate plans that satisfy a novel and useful set of conditions.
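Viewing goals as predicates over state sequences rather than over final states is easy to make concrete. The combinators below are an illustrative finite-trace sketch of that shift in perspective, not the paper's temporal language or planning algorithm.

```python
# States are dicts of fluents; a trajectory is a list of states; a goal is a
# predicate over trajectories rather than over the final state alone.

def always(p):
    return lambda traj: all(p(s) for s in traj)          # safety

def eventually(p):
    return lambda traj: any(p(s) for s in traj)          # achievement

def eventually_always(p):                                # achieve, then maintain
    return lambda traj: any(all(p(s) for s in traj[i:])
                            for i in range(len(traj)))

# Goal: reach 'at_goal' while keeping 'safe' throughout the execution.
goal = lambda traj: (always(lambda s: s['safe'])(traj)
                     and eventually(lambda s: s['at_goal'])(traj))

plan_trace = [{'safe': True, 'at_goal': False},
              {'safe': True, 'at_goal': True}]
print(goal(plan_trace))   # True: this plan yields a desirable sequence
```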
On methodology of representing knowledge in dynamic domains The main goal of this paper is to outline a methodology of programming in dynamic problem domains. The methodology is based on recent developments in theories of reasoning about action and change and in logic programming. The basic ideas of the approach are illustrated by discussion of the design of a program which verifies plans to control the reaction control system (RCS) of the Space Shuttle. We start with formalization of the RCS domain in an action description language. The resulting formalization ARCS together with a candidate plan α and a goal G are given as an input to a logic program. This program verifies if G would be true after executing α in the current situation. A high degree of trust in the program's correctness was achieved by (a) the simplicity and transparency of our formalization, ARCS, which made it possible for the users to informally verify its correctness; (b) a proof of correctness of the program with respect to ARCS. This is an ongoing work under a contract with the United Space Alliance—the company primarily responsible for operating the Space Shuttle.
Maintainability: A Weaker Stabilizability-Like Notion for High Level Control The goal of most agents is not just to reach a goal state, but rather also (or alternatively) to put restrictions on its trajectory, in terms of states it must avoid and goals that it must 'maintain'. This is analogous to the notions of 'safety' and 'stability' in the discrete event systems and temporal logic community. In this paper we argue that the notion of 'stability' is too strong for formulating 'maintenance' goals of an agent - in particular, reactive and software agents - and give examples of such agents. We present a weaker notion of 'maintainability' and show that our agents which do not satisfy the stability criteria do satisfy the weaker criteria. We give algorithms to test maintainability, and also to generate control for maintainability. We then develop the notion of 'supportability' that generalizes both 'maintainability' and 'stabilizability', develop an automata theory that distinguishes between exogenous and control actions, and develop a temporal logic based on it.
Planning with Extended Goals and Partial Observability Planning in nondeterministic domains with temporally extended goals under partial observability is one of the most challenging problems in planning. Simpler subsets of this problem have been already addressed in the literature, but the general combination of extended goals and partial observability is, to the best of our knowledge, still an open problem. In this paper we present a first attempt to solve the problem, namely, we define an algorithm that builds plans in the general setting of planning with extended goals and partial observability. The algorithm builds on top of the techniques developed in the planning with model checking framework for the restricted problems of extended goals and of partial observability.
Computational complexity of planning with temporal goals We consider the problem of how an agent creates a discrete spatial representation from its continuous interactions with the environment. Such a representation will be the minimal one that explains the experiences of the agent in the environment. In this ...
From theory to practice: the UTEP robot in the AAAI 96 and AAAI 97 robot contests In this paper we describe the control aspects of Diablo, the UTEP mobile robot that participated in two AAAI robot competitions. In the first competition, event one of the AAAI 96 robot contest, Diablo consistently scored 285 out of a total of 295 points. In the second competition, our robot won first place in the event "Tidy Up" of the home vacuum contest. The main goal in this paper will be to show how the agent theories - based on action theories - developed at UTEP and by Saffiotti et al...
Signed logic programs In this paper we explore the notion of a "signing" of a logic program, in the framework of the answer set semantics. In particular, we generalize and extend the notion of a signing, and show that even for programs with classical negation and disjunction the existence of a signing is a simple syntactic criterion that can guarantee several different sorts of good behavior: consistency, coincidence of consequences under answer set and well-founded semantics, existence of "standard" answer sets...
Formalizing (and Reasoning About) the Specifications of Workflows. We address the problem of workflow requirements specifications under the realistic assumptions that it involves experts from different domains (different business policies), and that not all the possible execution scenarios are known beforehand. Using recent results on reasoning about actions, we formalize the notion of the specifications' correctness. To address this, we propose a high level language AW as a basis of our prototype tool for process specification. We go "one step" before...
Games Against Nature (Extended Abstract)
The metric-FF planning system: translating "Ignoring delete lists" to numeric state variables Planning with numeric state variables has been a challenge for many years, and was a part of the 3rd International Planning Competition (IPC-3). Currently one of the most popular and successful algorithmic techniques in STRIPS planning is to guide search by a heuristic function, where the heuristic is based on relaxing the planning task by ignoring the delete lists of the available actions. We present a natural extension of "ignoring delete lists" to numeric state variables, preserving the relevant theoretical properties of the STRIPS relaxation under the condition that the numeric task at hand is "monotonic". We then identify a subset of the numeric IPC-3 competition language, "linear tasks", where monotonicity can be achieved by pre-processing. Based on that, we extend the algorithms used in the heuristic planning system FF to linear tasks. The resulting system Metric-FF is, according to the IPC-3 results which we discuss, one of the two currently most efficient numeric planners.
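The propositional core of "ignoring delete lists" is a monotone reachability fixpoint, sketched below under an assumed (name, preconditions, add-effects) action encoding; with deletes dropped, the reachable fact set only grows, so goal reachability in the relaxation is decidable in polynomial time. Metric-FF's actual contribution, extending this relaxation to monotonic numeric tasks, is not reproduced here.

```python
def relaxed_reachable(init, goal, actions):
    """Goal reachability in the delete-free relaxation of a STRIPS task.
    actions: iterable of (name, preconditions, add_effects), sets of facts."""
    facts = set(init)
    while not goal <= facts:
        new = set()
        for _, pre, add in actions:
            if pre <= facts:
                new |= add
        if new <= facts:
            return False        # fixpoint reached without the goal
        facts |= new
    return True

acts = [('move_ab', {'at_a'}, {'at_b'}),
        ('move_bc', {'at_b'}, {'at_c'})]
print(relaxed_reachable({'at_a'}, {'at_c'}, acts))   # True
```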
Phase Transitions in Classical Planning: An Experimental Study. Phase transitions in the solubility of problem instances are known in many types of computational problems relevant for artificial intelligence, most notably for the satisfiability problem of the classical propositional logic. However, phase transitions in classical planning have received far less attention. Bylander has investigated phase transitions theoretically as well as experimentally by using simplified planning algorithms, and shown that most of the soluble problems can be solved by a naïve hill-climbing algorithm. Because of the simplicity of his algorithms he did not investigate hard problems on the phase transition region. In this paper, we address exactly this problem. We introduce two new models of problem instances, one eliminating the most trivially insoluble instances from Bylander’s model, and the other restricting the class of problem instances further. Then we perform experiments on the behavior of different types of planning algorithms on hard problems from the phase transition region, showing that a planner based on general-purpose satisfiability algorithms outperforms two planners based on heuristic local search.
Using dynamic sets to overcome high I/O latencies during search Describes a single unifying abstraction called 'dynamic sets', which can offer substantial benefits to search applications. These benefits include greater opportunity in the I/O subsystem to aggressively exploit prefetching and parallelism, as well as support for associative naming to complement the hierarchical naming in typical file systems. This paper motivates dynamic sets and presents the design of a system that embodies this abstraction.
Inductive Properties of States In the situation calculus states are often distinguished from situations by the assumption that situations are paths in a rooted tree while a state is a particular truth assignment to the fluents. It is then possible that two situations have end points that agree on all fluents, i.e., are the same state, and yet be distinct from the perspective of situations. This has the merit of making inductive proofs simple as it introduces two axioms amounting to enforcing the rooted tree structure that are used as trivial bases for the inductions. In this paper we show that the tree structure is dispensable for induction when the underlying system is deterministic, thus elevating the state perspective to equal status.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms with FPGAs provides better performance than the other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes the current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.015558
0.027253
0.026667
0.013335
0.008921
0.004797
0.001834
0.000805
0.000174
0.000017
0
0
0
0
Embedded Solutions for Deep Neural Networks Implementation Deep Neural Networks and their associated learning paradigm, Deep Learning, represent today a breakthrough in the field of Artificial Intelligence due to the impressive results obtained in many application areas, especially in image, video or speech processing. The main hindrance to the development of such applications is the vast amount of computational power needed to train such structures. Various hardware solutions have arisen to address this problem, most of them relying on the intrinsic parallelism found in modern Graphical Processing Units. On the other hand, once the learning process is finished, the functional phase (inference) of the neural network requires substantially fewer hardware resources, thus enabling potential real-time solutions. Our work provides an extensive overview of currently available embedded solutions for Deep Neural Network implementation, pointing out their main characteristics, advantages and disadvantages. We also demonstrate through experimental results that combining hardware optimization with a suitable deep architecture can substantially decrease inference execution time.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Training with Confusion for Fine-Grained Visual Classification. Research in Fine-Grained Visual Classification has focused on tackling the variations in pose, lighting, and viewpoint using sophisticated localization and segmentation techniques, and the usage of robust texture features to improve performance. In this work, we look at the fundamental optimization of neural network training for fine-grained classification tasks with minimal inter-class variance, and attempt to learn features with increased generalization to prevent overfitting. We introduce Training-with-Confusion, an optimization procedure for fine-grained classification tasks that regularizes training by introducing confusion in activations. Our method can be generalized to any fine-tuning task; it is robust to the presence of small training sets and label noise; and adds no overhead to the prediction time. We find that Training-with-Confusion improves the state-of-the-art on all major fine-grained classification datasets.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Greedy Deep Dictionary Learning. In this work we propose a new deep learning tool called deep dictionary learning. Multi-level dictionaries are learnt in a greedy fashion, one layer at a time. This requires solving a simple (shallow) dictionary learning problem whose solution is well known. We apply the proposed technique to some benchmark deep learning datasets. We compare our results with other deep learning tools such as the stacked autoencoder and the deep belief network, and with state-of-the-art supervised dictionary learning tools such as discriminative KSVD and label-consistent KSVD. Our method yields better results than all of them.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
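Since the definition above is compact, a tiny executable illustration may help. The following sketch, assuming the usual Gelfond-Lifschitz reduct characterization of stable models, brute-forces the stable models of a hypothetical two-rule propositional program; the rule encoding and function names are my own.

```python
from itertools import chain, combinations

# A normal rule is (head, positive_body, negative_body); "not r" goes in
# the negative body. Example program:  p :- q.   q :- not r.
PROGRAM = [
    ("p", ["q"], []),
    ("q", [], ["r"]),
]

def least_model(positive_rules):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body, _ in positive_rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """Gelfond-Lifschitz test: delete rules whose negative body intersects
    the candidate, drop the remaining negations, then compare least models."""
    reduct = [(h, pb, []) for (h, pb, nb) in program if not (set(nb) & candidate)]
    return least_model(reduct) == candidate

atoms = sorted({a for h, pb, nb in PROGRAM for a in [h, *pb, *nb]})
subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(PROGRAM, set(s))])  # [{'p', 'q'}]
```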
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of a quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
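The "straightforward algorithm" the abstract refers to can be sketched as a naive recursive evaluator that branches on each quantified variable in prefix order. This is a minimal illustration of the basic exponential procedure, without the paper's pruning techniques; the encoding of literals as (variable, polarity) pairs is an assumption of mine.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Naive evaluation of a QBF in prenex CNF.

    prefix: list of ('forall' | 'exists', var) pairs, outermost first.
    clauses: list of clauses; each literal is (var, polarity) with
    polarity True for a positive literal.
    """
    assignment = assignment or {}
    if not prefix:
        # Matrix check: every clause needs at least one satisfied literal.
        return all(any(assignment[v] == pol for v, pol in clause)
                   for clause in clauses)
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], clauses, {**assignment, var: val})
                for val in (False, True))
    return all(branches) if quant == 'forall' else any(branches)

# forall x exists y . (x or y) and (not x or not y)  -- true, with y = not x
prefix = [('forall', 'x'), ('exists', 'y')]
clauses = [[('x', True), ('y', True)], [('x', False), ('y', False)]]
print(eval_qbf(prefix, clauses))   # True
```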
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map: for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
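To make the kernel trick above concrete, here is a minimal sketch of kernel PCA, assuming NumPy and an RBF kernel: the centered Gram matrix is eigendecomposed in place of the feature-space covariance. The function name and parameters (gamma, n_components) are illustrative, not from the paper, whose experiments use polynomial kernels.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project the n training points onto the top kernel principal components."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # double-centering = centering in feature space
    vals, vecs = np.linalg.eigh(Kc)           # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]    # largest first
    # Embedding of point i on component k is sqrt(lambda_k) * v_k[i].
    return vecs[:, :n_components] * np.sqrt(np.clip(vals[:n_components], 0, None))

Z = kernel_pca(np.random.randn(200, 5), n_components=2)
print(Z.shape)   # (200, 2)
```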
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
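The square-root idea can be shown on a toy problem. The sketch below is my own assumption-laden example rather than the paper's algorithm: it poses a 1-D smoothing problem as linear least squares and solves it through the QR factorization of the measurement Jacobian; the real method additionally exploits sparsity and column ordering.

```python
import numpy as np

# Toy 1-D SLAM-style smoothing: poses x0..x3, odometry x_{i+1} - x_i = u_i,
# a prior on x0, and one loop-closure-like measurement x3 - x0 = z.
# Stack everything as a linear least-squares problem  min ||A x - b||^2
# and solve it through the QR factorization of A (the "square root").
rows, rhs = [], []

def add_factor(coeffs, value, n=4):
    row = np.zeros(n)
    for idx, c in coeffs:
        row[idx] = c
    rows.append(row)
    rhs.append(value)

add_factor([(0, 1.0)], 0.0)                    # prior: x0 = 0
for i, u in enumerate([1.0, 1.1, 0.9]):        # odometry measurements
    add_factor([(i + 1, 1.0), (i, -1.0)], u)
add_factor([(3, 1.0), (0, -1.0)], 2.9)         # "loop closure": x3 - x0 = 2.9

A, b = np.vstack(rows), np.array(rhs)
Q, R = np.linalg.qr(A)                         # square-root factor R
x = np.linalg.solve(R, Q.T @ b)                # back-substitution
print(x)   # smoothed estimate for x0..x3, reconciling odometry and closure
```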
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system, which controls some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Visual Object Categorization using Distance-Based Discriminant Analysis This paper formulates the problem of object categorization in the discriminant analysis framework, focusing on transforming visual feature data so as to make it conform to the compactness hypothesis in order to improve categorization accuracy. The sought transformation, in turn, is found as a solution to an optimization problem formulated in terms of inter-observation distances only, using the technique of iterative majorization. The proposed approach is suitable for both binary and multiple-class categorization problems, and can be applied as a dimensionality reduction technique. In the latter case, the number of discriminative features is determined automatically, since the process of feature extraction is fully embedded in the optimization procedure. Performance tests validate our method on a number of benchmark data sets from the UCI repository, while experiments in visual object and content-based image categorization demonstrate very competitive results, confirming the method's ability to produce semantically relevant matches that share the same or synonymous vocabulary with the query category, and to allow multiple pertinent category assignments.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of a quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map: for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system, which controls some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Exploiting In-Memory and On-Disk Redundancy to Conserve Energy in Storage Systems Today's storage systems place an imperative demand on energy efficiency. A storage system often places disks into standby mode, stopping them from spinning, to conserve energy when the load is not high. The major obstacle to this method is the high spin-up cost incurred by passively waking up a standby disk to service a request. In this paper, we propose a redundancy-based, hierarchical I/O cache architecture called RIMAC to solve the problem. The idea of RIMAC is to enable data on the standby disk(s) to be recovered by accessing the two-level I/O cache and/or active disks if needed. In parity-based redundant disk arrays, RIMAC exploits parity redundancy to dynamically XOR-reconstruct data being accessed toward standby disk(s) at both the cache and disk levels. By avoiding passive spin-ups, RIMAC can significantly improve both energy efficiency and performance. We evaluated RIMAC by augmenting a validated storage system simulator, disksim, and tested four real-life server traces including HP's cello99, TPC-D, OLTP and SPC's search engine. Comprehensive results indicate RIMAC is able to reduce energy consumption by up to 18% and simultaneously improve the average response time by up to 34% in a small-scale RAID-5 system, compared with threshold-based power management schemes.
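A toy illustration of the XOR-reconstruction read path may help; this is my own minimal sketch of RAID-5-style parity recovery, not RIMAC's actual cache hierarchy or simulator setup.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

# A RAID-5-like stripe: parity = XOR of all data blocks.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Disk 1 is in standby; reconstruct its block from the active disks plus
# parity instead of spinning it up (avoiding the passive spin-up cost).
reconstructed = xor_blocks([data[0], data[2], parity])
assert reconstructed == data[1]
print(reconstructed)   # b'BBBB'
```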
Reducing Disk Power Consumption in Servers with DRPM Although effective techniques exist for tackling disk power for laptops and workstations, applying them in a server environment presents a considerable challenge, especially under stringent performance requirements.The dynamic rotations per minute technique dynamically modulates the hard-disk rotation speed so that the disk can service requests at different RPMs, providing large savings in power consumption with little perturbation in delivered performance.
Highly available and heterogeneous continuous media storage systems A number of recent technological trends have made data intensive applications such as continuous media (audio and video) servers a reality. These servers store and retrieve large volumes of data using magnetic disks. Servers consisting of multiple nodes and large arrays of heterogeneous disk drives have become a fact of life for several reasons. First, magnetic disks might fail. Failed disks are almost always replaced with newer disk models because the current technological trend for these devices is one of annual increase in both performance and storage capacity. Second, storage requirements are ever increasing, forcing servers to be scaled up progressively. In this study, we present a framework to enable parity-based data protection for heterogeneous storage systems and to compute their mean lifetime. We describe the tradeoffs associated with three alternative techniques: independent subservers, dependent subservers, and disk merging. The disk merging approach provides a solution for systems that require highly available secondary storage in environments that also necessitate maximum flexibility.
Predictive data grouping: Defining the bounds of energy and latency reduction through predictive data grouping and replication We demonstrate that predictive grouping is an effective mechanism for reducing disk arm movement, thereby simultaneously reducing energy consumption and data access latency. We further demonstrate that predictive grouping has untapped dramatic potential to further improve access performance and limit energy consumption. Data retrieval latencies are considered a major bottleneck, and with growing volumes of data and increased storage needs it is only growing in significance. Data storage infrastructure is therefore a growing consumer of energy at data-center scales, while the individual disk is already a significant concern for mobile computing (accounting for almost a third of a mobile system's energy demands). While improving responsiveness of storage subsystems and hence reducing latencies in data retrieval is often considered contradictory with efforts to reduce disk energy consumption, we demonstrate that predictive data grouping has the potential to simultaneously work towards both these goals. Predictive data grouping has advantages in its applicability compared to both prior approaches to reducing latencies and to reducing energy usage. For latencies, grouping can be performed opportunistically, thereby avoiding the serious performance penalties that can be incurred with prior applications of access prediction (such as predictive prefetching of data). For energy, we show how predictive grouping can even save energy use for an individual disk that is never idle. Predictive data grouping with effective replication results in a reduction of the overall mechanical movement required to retrieve data. We have built upon our detailed measurements of disk power consumption, and have estimated both the energy expended by a hard disk for its mechanical components, and that needed to move the disk arm. We have further compared, via simulation, three models of predictive grouping of on-disk data, including an optimal arrangement of data that is guaranteed to minimize disk arm movement. These experiments have allowed us to measure the limits of performance improvement achievable with optimal data grouping and replication strategies on a single device, and have further allowed us to demonstrate the potential of such schemes to reduce energy consumption of mechanical components by up to 70%.
PARAID: a gear-shifting power-aware RAID Reducing power consumption for server computers is important, since increased energy usage causes increased heat dissipation, greater cooling requirements, reduced computational density, and higher operating costs. For a typical data center, storage accounts for 27% of energy consumption. Conventional server-class RAIDs cannot easily reduce power because loads are balanced to use all disks even for light loads. We have built the Power-Aware RAID (PARAID), which reduces energy use of commodity server-class disks without specialized hardware. PARAID uses a skewed striping pattern to adapt to the system load by varying the number of powered disks. By spinning disks down during light loads, PARAID can reduce power consumption, while still meeting performance demands, by matching the number of powered disks to the system load. Reliability is achieved by limiting disk power cycles and using different RAID encoding schemes. Based on our five-disk prototype, PARAID uses up to 34% less power than conventional RAIDs, while achieving similar performance and reliability.
An optimality proof of the LRU-K page replacement algorithm This paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [O'Neil et al. 1993]. The algorithm is called the LRU-K method, and reduces to the well-known LRU (Least Recently Used) method for K = 1. Previous work [O'Neil et al. 1993; Weikum et al. 1994; Johnson and Shasha 1994] has shown the effectiveness for K > 1 by simulation, especially in the most common case of K = 2. The basic idea in LRU-K is to keep track of the times of the last K references to memory pages, and to use this statistical information to rank-order the pages as to their expected future behavior. Based on this, the page replacement policy decision is made: which memory-resident page to replace when a newly accessed page must be read into memory. In the current paper, we prove, under the assumptions of the independent reference model, that LRU-K is optimal. Specifically we show: given the times of the (up to) K most recent references to each disk page, no other algorithm A making decisions to keep pages in a memory buffer holding n - 1 pages based on this information can improve on the expected number of I/Os to access pages over the LRU-K algorithm using a memory buffer holding n pages. The proof uses the Bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is actually made.
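A compact way to see the LRU-K policy (K = 2) in action is the toy cache below. It is a simplification that ignores the correlated-reference period and tie-breaking details discussed in the original LRU-K work; all names and the eviction bookkeeping are illustrative.

```python
import time

class LRU2Cache:
    """Toy LRU-2: evict the page whose second-most-recent reference is
    oldest; pages with only one recorded reference are evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.history = {}                 # page -> times of its last <= 2 refs

    def access(self, page, now=None):
        now = time.monotonic() if now is None else now
        refs = self.history.setdefault(page, [])
        refs.append(now)
        del refs[:-2]                     # keep only the last K = 2 times
        if len(self.history) > self.capacity:
            # Prefer pages with a single reference; then oldest Kth reference.
            victim = min((p for p in self.history if p != page),
                         key=lambda p: (len(self.history[p]) == 2,
                                        self.history[p][0]))
            del self.history[victim]

cache = LRU2Cache(capacity=2)
for t, page in enumerate(["a", "b", "a", "c"]):
    cache.access(page, now=t)
print(sorted(cache.history))   # ['a', 'c']: 'b', with one old reference, was evicted
```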
Prefetching with adaptive cache culling for striped disk arrays Conventional prefetching schemes regard prediction accuracy as important because useless data prefetched by a faulty prediction may pollute the cache. If prefetching requires considerably low read cost but the prediction is not accurate, it may or may not be beneficial depending on the situation. However, the problem of low prediction accuracy can be dramatically reduced if we efficiently manage prefetched data by considering the total hit rate for both prefetched data and cached data. To achieve this goal, we propose an adaptive strip prefetching (ASP) scheme, which provides low prefetching cost and evicts prefetched data at the proper time by using differential feedback that maximizes the hit rate of both prefetched data and cached data in a given cache management scheme. Additionally, ASP controls prefetching by using an online disk simulation that investigates whether prefetching is beneficial for the current workloads and stops prefetching if it is not. Finally, ASP provides methods that resolve both independency loss and parallelism loss that may arise in striped disk arrays. We implemented a kernel module in Linux version 2.6.18 as a RAID-5 driver with our scheme, which significantly outperforms the sequential prefetching of Linux from several times to an order of magnitude in a variety of realistic workloads.
Towards application/file-level characterization of block references: a case for fine-grained buffer management Two contributions are made in this paper. First, we show that system level characterization of file block references is inadequate for maximizing buffer cache performance. We show that a finer-grained characterization approach is needed. Though application level characterization methods have been proposed, this is the first attempt, to the best of our knowledge, to consider file level characterizations. We propose an Application/File-level Characterization (AFC) scheme where we detect on-line the reference characteristics at the application level and then at the file level, if necessary. The results of this characterization are used to employ appropriate replacement policies in the buffer cache to maximize performance. The second contribution is in proposing an efficient and fair buffer allocation scheme. Application or file level resource management is infeasible unless there exists an allocation scheme that is efficient and fair. We propose the ΔHIT allocation scheme that takes away a block from the application/file where the removal results in the smallest reduction in the number of expected buffer cache hits. Both the AFC and ΔHIT schemes are on-line schemes that detect and allocate as applications execute. Experiments using trace-driven simulations show that substantial performance improvements can be made. For single application executions the hit ratio increased an average of 13 percentage points compared to the LRU policy, with a maximum increase of 59 percentage points, while for multiple application executions, the increase is an average of 12 percentage points, with a maximum of 32 percentage points for the workloads considered.
Parity logging overcoming the small write problem in redundant disk arrays Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for read accesses and large write accesses. Their performance on small writes, however, is much worse than mirrored disks—the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide a detailed analysis of parity logging and competing schemes—mirroring, floating storage, and RAID level 5— and verify these models by simulation. Parity logging provides performance competitive with mirroring, the best of the alternative single failure tolerating disk array organizations. However, its overhead cost is close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching much more effectively than all three alternative approaches.
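The core journalling trick above can be shown in a few lines: for a small write, the parity update old_parity XOR old_data XOR new_data is appended to a log and applied to the parity later in one batch, avoiding a read-modify-write on the parity disk per small write. This is a heavily simplified in-memory sketch under my own naming, not the paper's disk layout.

```python
def xor(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stripe state: data blocks on disks 0..2, parity on a fourth disk.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor(xor(data[0], data[1]), data[2])
parity_log = []                          # stand-in for the append-only log region

# Small write to block 1: append the parity *delta* instead of updating
# parity in place.
old, new = data[1], b"BXBB"
parity_log.append(xor(old, new))         # delta = old XOR new
data[1] = new

# Later, apply the accumulated deltas to the parity in one batch.
for delta in parity_log:
    parity = xor(parity, delta)
parity_log.clear()

assert parity == xor(xor(data[0], data[1]), data[2])
print("parity consistent after batched log apply")
```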
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
Why does file system prefetching work? Most file systems attempt to predict which disk blocks will be needed in the near future and prefetch them into memory; this technique can improve application throughput by as much as 50%. But why? The reasons include that the disk cache comes into play, the device driver amortizes the fixed cost of an I/O operation over a larger amount of data, total disk seek time can be decreased, and that programs can overlap computation and I/O. However, intuition does not tell us the relative benefit of each of these causes, or techniques for increasing the effectiveness of prefetching. To answer these questions, we constructed an analytic performance model for file system reads. The model is based on a 4.4BSD-derived file system, and parameterized by the access patterns of the files, layout of files on disk, and the design characteristics of the file system and of the underlying disk. We then validated the model against several simple workloads; the predictions of our model were typically within 4% of measured values, and differed at most by 9% from measured values. Using the model and experiments, we explain why and when prefetching works, and make proposals for how to tune file system and disk parameters to improve overall system throughput.
Modelling and Generation of Graphical User Interfaces in the TADEUS Approach
DMA-based prefetching for I/O-intensive workloads on the Cell architecture The recent advent of asymmetric multi-core processors such as the Cell Broadband Engine (Cell/BE) has popularized the use of heterogeneous architectures. A growing body of research is exploring the use of such architectures, especially in High-End Computing, for supporting scientific applications. However, prior research has focused on use of the available Cell/BE operating systems and runtime environments for supporting compute-intensive jobs. Data and I/O intensive workloads have largely been ignored in this domain. In this paper, we take the first steps in supporting I/O intensive workloads on the Cell/BE and deriving guidelines for optimizing the execution of I/O workloads on heterogeneous architectures. We explore various performance enhancing techniques for such workloads on an actual Cell/BE system. Among the techniques we explore, an asynchronous prefetching-based approach, which uses the PowerPC core of the Cell/BE for file prefetching and decentralized DMAs from the synergistic processing cores (SPEs), improves the performance for I/O workloads that include an encryption/decryption component by 22.2%, compared to I/O performed naïvely from the SPEs. Our evaluation shows promising results and lays the foundation for developing more efficient I/O support libraries for multi-core asymmetric architectures.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.24
0.24
0.12
0.04
0.010909
0.001635
0.000625
0.000238
0.000003
0
0
0
0
0
I/O performance of fully-replicated disk systems Mirrored disk storage is an accepted technique to enhance fault-tolerance of data through complete replication. Recent research suggested that in addition to higher reliability, mirrored disks can offer better I/O performance by either allowing parallel reads or by reducing the seek time in cases where shortest-seek-distance algorithms can be used. Accurate analysis of mirrored disk systems shows that there is a significant correlation between the distributions of the disk head positions, which makes them both interdependent and non-uniform. Furthermore, after each write all disk heads move to the same position to form what the authors call a 'bundle'. This phenomenon may seriously deteriorate system performance. Subsequent reads gradually break up the bundles. Finally, under low rates of request arrivals, the authors provide analytical expressions for optimal anticipation points for two and three mirrored disks under general read/write combinations. Their work makes the following contributions: (1) new heuristics for treating the 'bundling phenomenon' adaptively, for applications with unknown read/write ratios; (2) a new technique that improves the expected seek time and workload balancing by using 'shifted mirroring'.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of a quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map: for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system, which controls some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Efficient Page Lock/Release OS Mechanism for Out-of-Core Embedded Applications Embedded system applications are becoming more complex, requiring increased memory. However, additional physical memory increases system cost and power consumption. Virtual memory techniques, such as paging, can make use of low-power auxiliary memory, allowing applications increased memory for execution. Currently, paging yields poor performance due to page swapping overheads. This paper presents a combined approach of using application hints along with an efficient page lock/release mechanism in the OS to reduce paging overheads. This makes paging a viable solution for supporting out-of-core embedded real-time applications. The Co-operative Application Specific Paging (CASP) mechanism presented works in conjunction with most existing page replacement policies, providing explicit support for applications via insertion of paging hints in the application source code. Both automatic and manual methods of inserting hints are described and evaluated. The benchmark results of a CASP implementation in the Linux 2.6.16 kernel show a significant reduction in the number of page faults (22.3%) and a considerable improvement in application execution times (12.5%).
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of a quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
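A compact numpy sketch of the computation that abstract describes: form a kernel (Gram) matrix, double-center it, and eigendecompose it to obtain nonlinear principal components. The RBF kernel and its width are illustrative choices, not prescribed by the paper.

import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # Gram matrix under an RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Double-center K so the implicit features have zero mean in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; numpy returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scale eigenvectors so each projection direction has unit norm in feature space
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas   # nonlinear principal components of the training points

X = np.random.RandomState(0).randn(100, 5)
print(kernel_pca(X).shape)   # (100, 2)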
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
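The linear-algebra core of that smoothing approach can be sketched in a few lines: rather than propagating an EKF covariance, solve the linearized least-squares problem by factorizing the measurement Jacobian itself. In this hedged sketch, A and b are random stand-ins for a real factor-graph Jacobian and residual, and the sparsity and column-ordering machinery the paper relies on is omitted.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))   # measurement Jacobian (m >> n)
b = rng.standard_normal(200)         # residual vector

# Square-root information step: R from QR satisfies R^T R = A^T A, so
# back-substitution on R recovers the update without ever squaring A.
Q, R = np.linalg.qr(A)
delta = np.linalg.solve(R, Q.T @ b)

# The (numerically worse) normal-equations solution, for comparison.
delta_ne = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(delta, delta_ne))  # True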
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
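The subspace-sampling step that abstract describes can be illustrated with one standard closed form for points along a Grassmann geodesic (via principal angles). The sketch below assumes orthonormal PCA bases for the source and target domains (random stand-ins here); it is an illustration of the idea, not the authors' exact procedure.

import numpy as np

def grassmann_geodesic(Y0, Y1, ts):
    # Y0, Y1: (d, k) orthonormal bases; ts: points in [0, 1] along the geodesic.
    # Direction matrix whose SVD encodes the principal angles between the spans.
    M = (np.eye(Y0.shape[0]) - Y0 @ Y0.T) @ Y1 @ np.linalg.inv(Y0.T @ Y1)
    U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
    theta = np.arctan(sigma)
    out = []
    for t in ts:
        Yt = Y0 @ Vt.T @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))
        out.append(np.linalg.qr(Yt)[0])   # re-orthonormalize for safety
    return out

rng = np.random.default_rng(0)
Y0 = np.linalg.qr(rng.standard_normal((20, 3)))[0]   # source basis (stand-in)
Y1 = np.linalg.qr(rng.standard_normal((20, 3)))[0]   # target basis (stand-in)
subspaces = grassmann_geodesic(Y0, Y1, [0.0, 0.25, 0.5, 0.75, 1.0])
print(len(subspaces), subspaces[0].shape)   # 5 intermediate (20, 3) bases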
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Unsupervised Lexical Simplification for Non-Native Speakers. Lexical Simplification is the task of replacing complex words with simpler alternatives. We propose a novel, unsupervised approach for the task. It relies on two resources: a corpus of subtitles and a new type of word embeddings model that accounts for the ambiguity of words. We compare the performance of our approach and many others over a new evaluation dataset, which accounts for the simplification needs of 400 non-native English speakers. The experiments show that our approach outperforms state-of-the-art work in Lexical Simplification.
Out in the Open: Finding and Categorising Errors in the Lexical Simplification Pipeline. Lexical simplification is the task of automatically reducing the complexity of a text by identifying difficult words and replacing them with simpler alternatives. Whilst this is a valuable application of natural language generation, rudimentary lexical simplification systems suffer from a high error rate which often results in nonsensical, non-simple text. This paper seeks to characterise and quantify the errors which occur in a typical baseline lexical simplification system. We expose 6 distinct categories of error and propose a classification scheme for these. We also quantify these errors for a moderate size corpus, showing the magnitude of each error type. We find that for 183 identified simplification instances, only 19 (10.38%) result in a valid simplification, with the rest causing errors of varying gravity.
Simplifying Lexical Simplification: Do We Need Simplified Corpora? Simplification of lexically complex texts, by replacing complex words with their simpler synonyms, helps non-native speakers, children, and language-impaired people understand text better. Recent lexical simplification methods rely on manually simplified corpora, which are expensive and time-consuming to build. We present an unsupervised approach to lexical simplification that makes use of the most recent word vector representations and requires only regular corpora. Results of both automated and human evaluation show that our simple method is as effective as systems that rely on simplified corpora.
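A toy sketch of the substitution step such embedding-based simplifiers share: keep candidates whose vectors stay close to the target word's vector, then prefer the most frequent (presumably simplest) survivor. The vectors, the frequency counts, and the 0.9 similarity threshold below are all made-up placeholders, not resources from the paper.

import numpy as np

embeddings = {                          # hypothetical word vectors
    "intelligent":   np.array([0.9, 0.1, 0.3]),
    "smart":         np.array([0.8, 0.2, 0.3]),
    "perspicacious": np.array([0.7, 0.1, 0.5]),
}
frequency = {"smart": 90_000, "perspicacious": 40}   # hypothetical corpus counts

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def simplify(word, candidates):
    # Keep candidates close in meaning, then pick the most frequent one.
    similar = [c for c in candidates
               if cosine(embeddings[word], embeddings[c]) > 0.9]
    return max(similar, key=frequency.get, default=word)

print(simplify("intelligent", ["smart", "perspicacious"]))   # smart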
Making It Simplext: Implementation and Evaluation of a Text Simplification System for Spanish The way in which a text is written can be a barrier for many people. Automatic text simplification is a natural language processing technology that, when mature, could be used to produce texts that are adapted to the specific needs of particular users. Most research in the area of automatic text simplification has dealt with the English language. In this article, we present results from the Simplext project, which is dedicated to automatic text simplification for Spanish. We present a modular system with dedicated procedures for syntactic and lexical simplification that are grounded on the analysis of a corpus manually simplified for people with special needs. We carried out an automatic evaluation of the system's output, taking into account the interaction between three different modules dedicated to different simplification aspects. One evaluation is based on readability metrics for Spanish and shows that the system is able to reduce the lexical and syntactic complexity of the texts. We also show, by means of a human evaluation, that sentence meaning is preserved in most cases. Even though our work represents the first automatic text simplification system for Spanish that addresses different linguistic aspects, our results are comparable to the state of the art in English automatic text simplification.
Putting it simply: a context-aware approach to lexical simplification We present a method for lexical simplification. Simplification rules are learned from a comparable corpus, and the rules are applied in a context-aware fashion to input sentences. Our method is unsupervised. Furthermore, it does not require any alignment or correspondence among the complex and simple corpora. We evaluate the simplification according to three criteria: preservation of grammaticality, preservation of meaning, and degree of simplification. Results show that our method outperforms an established simplification baseline for both meaning preservation and simplification, while maintaining a high level of grammaticality.
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Affinity analysis of coded data sets Coded data sets are commonly used as compact representations of real-world processes. Such data sets have been studied within various research fields, from association mining, data warehousing, knowledge discovery, and collaborative filtering to machine learning. However, previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval to enable high-performance analysis of data masses that scale beyond traditional approaches. Part of this PhD study focuses on a new type of kernel projection function that can be used to find similarities in sparse discrete data spaces. This study presents experimental results on how information retrieval indexes scale and outperform two common relational data schemas with a leading commercial DBMS for market basket analysis.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
Unsupervised Learning of Multiple Motifs in Biopolymers Using Expectation Maximization The MEME algorithm extends the expectation maximization (EM) algorithm for identifying motifs in unaligned biopolymer sequences. The aim of MEME is to discover new motifs in a set of biopolymer sequences where little or nothing is known in advance about any motifs that may be present. MEME innovations expand the range of problems which can be solved using EM and increase the chance of finding good solutions. First, subsequences which actually occur in the biopolymer sequences are used as starting points for the EM algorithm to increase the probability of finding globally optimal motifs. Second, the assumption that each sequence contains exactly one occurrence of the shared motif is removed. This allows multiple appearances of a motif to occur in any sequence and permits the algorithm to ignore sequences with no appearance of the shared motif, increasing its resistance to noisy data. Third, a method for probabilistically erasing shared motifs after they are found is incorporated so that several distinct motifs can be found in the same set of sequences, both when different motifs appear in different sequences and when a single sequence may contain multiple motifs. Experiments show that MEME can discover both the CRP and LexA binding sites from a set of sequences which contain one or both sites, and that MEME can discover both the −10 and −35 promoter regions in a set of E. coli sequences.
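The core EM iteration underlying MEME's approach can be sketched for the simplest one-occurrence-per-sequence case (with an implicit uniform background): the E-step weighs every possible motif start, and the M-step re-estimates the position weight matrix from the expected counts. MEME's subsequence-based starting points, multi-occurrence model, and motif erasing are omitted here; all parameters below are illustrative.

import numpy as np

ALPHA = "ACGT"
IDX = {c: i for i, c in enumerate(ALPHA)}

def em_motif(seqs, W, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pwm = rng.dirichlet(np.ones(4), size=W)        # position weight matrix, W x 4
    for _ in range(iters):
        counts = np.full((W, 4), 0.1)              # pseudocounts for smoothing
        for s in seqs:
            starts = len(s) - W + 1
            # E-step: posterior over where the single motif occurrence starts.
            lik = np.array([np.prod([pwm[j, IDX[s[i + j]]] for j in range(W)])
                            for i in range(starts)])
            post = lik / lik.sum()
            # M-step contribution: expected letter counts under the posterior.
            for i, p in enumerate(post):
                for j in range(W):
                    counts[j, IDX[s[i + j]]] += p
        pwm = counts / counts.sum(axis=1, keepdims=True)
    return pwm

seqs = ["ACGTACGGTT", "TTACGGAACG", "GGGACGGTTT"]   # planted motif: ACGG
print(em_motif(seqs, 4).round(2))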
Distributed operating systems Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a distributed operating system and how it is distinguished from a computer network, various key design issues are discussed. Then several examples of current research projects are examined in some detail, namely, the Cambridge Distributed Computing System, Amoeba, V, and Eden.
Comparative Evaluation of Latency Tolerance Techniques for Software Distributed Shared Memory A key challenge in achieving high performance on software DSMs is overcoming their relatively large communication latencies. In this paper, we consider two techniques which address this problem: prefetching and multithreading. While previous studies have examined each of these techniques in isolation, this paper is the first to evaluate both techniques using a consistent hardware platform and set of applications, thereby allowing direct comparisons. In addition, this is the first study to consider combining prefetching and multithreading in a software DSM. We performed our experiments on real hardware using a full implementation of both techniques. Our experimental results demonstrate that both prefetching and multithreading result in significant performance improvements when applied individually. In addition, we observe that prefetching and multithreading can potentially complement each other by using prefetching to hide memory latency and multithreading to hide synchronization latency.
Phoenix: a safe in-memory file system Phoenix maintains two timestamped versions of the in-memory file system, providing a reserve version that ensures safety for diskless computers with battery-powered memory.
Small cache, big effect: provable load balancing for randomly partitioned cluster services Load balancing requests across a cluster of back-end servers is critical for avoiding performance bottlenecks and meeting service-level objectives (SLOs) in large-scale cloud computing services. This paper shows how a small, fast popularity-based front-end cache can ensure load balancing for an important class of such services; furthermore, we prove an O(n log n) lower-bound on the necessary cache size and show that this size depends only on the total number of back-end nodes n, not the number of items stored in the system. We validate our analysis through simulation and empirical results running a key-value storage system on an 85-node cluster.
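A small simulation sketch of that setting: Zipf-skewed requests over randomly partitioned keys, with a front-end cache of O(n log n) of the hottest items absorbing the head of the distribution. The constants (Zipf exponent, cache multiplier, workload sizes) are illustrative choices, not the paper's.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_keys, n_reqs = 85, 100_000, 1_000_000
cache_size = int(8 * n_nodes * np.log(n_nodes))     # O(n log n) cache entries

ranks = np.arange(1, n_keys + 1)
popularity = (1.0 / ranks) / np.sum(1.0 / ranks)    # Zipf(1) request mix
requests = rng.choice(n_keys, size=n_reqs, p=popularity)

node_of = rng.integers(0, n_nodes, size=n_keys)     # random key partitioning
# Keys 0..cache_size-1 are the most popular, so the front-end cache serves them;
# only misses generate back-end load.
misses = requests[requests >= cache_size]
load = np.bincount(node_of[misses], minlength=n_nodes)

print("max/mean back-end load:", load.max() / load.mean())   # close to 1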
Exploring Sequence Alignment Algorithms On FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
Scores: 1.066667, 0.033333, 0.028571, 0.022222, 0.008333, 0, 0, 0, 0, 0, 0, 0, 0, 0
IBM Intelligent Bricks project—Petabytes and beyond This paper provides an overview of the Intelligent Bricks project in progress at IBM Research. It describes common problems faced by data center operators and proposes a comprehensive solution based on brick architectures. Bricks are hardware building blocks. Because of certain properties, defined here, scalable and reliable systems can be built with collections of identical bricks. An important feature is that brick-based systems must survive the failure of any brick without requiring human intervention, as long as most bricks are operational. This simplifies system management and allows very dense and very scalable systems to be built. A prototype storage server in the form of a 3 × 3 × 3 array of bricks, capable of storing 26 TB, is operational at the IBM Almaden Research Center. It successfully demonstrates the concepts of the Intelligent Bricks architecture. The paper describes this implementation of brick architectures based on newly developed communication and cooling technologies, the software developed, and techniques for building very reliable systems from low-cost bricks, and it discusses the performance and the future of intelligent brick systems.
Ursa minor: versatile cluster-based storage No single encoding scheme or fault model is optimal for all data. A versatile storage system allows them to be matched to access patterns, reliability requirements, and cost goals on a per-data item basis. Ursa Minor is a cluster-based storage system that allows data-specific selection of, and on-line changes to, encoding schemes and fault models. Thus, different data types can share a scalable storage infrastructure and still enjoy specialized choices, rather than suffering from "one size fits all." Experiments with Ursa Minor show performance benefits of 2-3× when using specialized choices as opposed to a single, more general, configuration. Experiments also show that a single cluster supporting multiple workloads simultaneously is much more efficient when the choices are specialized for each distribution rather than forced to use a "one size fits all" configuration. When using the specialized distributions, aggregate cluster throughput nearly doubled.
Dynamic partitioning of the cache hierarchy in shared data centers Due to the imperative need to reduce the management costs of large data centers, operators multiplex several concurrent database applications on a server farm connected to shared network attached storage. Determining and enforcing per-application resource quotas in the resulting cache hierarchy, on the fly, poses a complex resource allocation problem spanning the database server and the storage server tiers. This problem is further complicated by the need to provide strict Quality of Service (QoS) guarantees to hosted applications. In this paper, we design and implement a novel coordinated partitioning technique of the database buffer pool and storage cache between applications for any given cache replacement policy and per-application access pattern. We use statistical regression to dynamically determine the mapping between cache quota settings and the resulting per-application QoS. A resource controller embedded within the database engine actuates the partitioning of the two-level cache, converging towards the configuration with maximum application utility, expressed as the service provider revenue in that configuration, based on a set of latency sample points. Our experimental evaluation, using the MySQL database engine, a server farm with consolidated storage, and two e-commerce benchmarks, shows the effectiveness of our technique in enforcing application QoS, as well as maximizing the revenue of the service provider in shared server farms.
MC2: Multiple Clients on a Multilevel Cache In today's networked storage environment, it is common to have a hierarchy of caches where the lower levels of the hierarchy are accessed by multiple clients. This sharing can have both positive or negative effects. While data fetched by one client can be used by another client without incurring additional delays, clients competing for cache buffers can evict each other's blocks and interfere with exclusive caching schemes. Our algorithm, MC2, combines local, per client management with a global, system-wide, scheme, to emphasize the positive effects of sharing and reduce the negative ones. The local scheme uses readily available information about the client's future access profile to save the most valuable blocks, and to choose the best replacement policy for them. The global scheme uses the same information to divide the shared cache space between clients, and to manage this space. Exclusive caching is maintained for non-shared data and is disabled when sharing is identified. Our simulation results show that the combined algorithm significantly reduces the overall I/O response times of the system.
A comparison of high-availability media recovery techniques We compare two high-availability techniques for recovery from media failures in database systems. Both techniques achieve high availability by having two copies of all data and indexes, so that recovery is immediate. “Mirrored declustering” spreads two copies of each relation across two identical sets of disks. “Interleaved declustering” spreads two copies of each relation across one set of disks while keeping both copies of each tuple on separate disks. Both techniques pay the same costs of doubling storage requirements and requiring updates to be applied to both copies. Mirroring offers greater simplicity and universality. Recovery can be implemented at lower levels of the system software (e.g., the disk controller). For architectures that do not share disks globally, it allows global and local cluster indexes to be independent. Also, mirroring does not require data to be declustered (i.e., spread over multiple disks). Interleaved declustering offers significant improvements in recovery time, mean time to loss of both copies of some data, throughput during normal operation, and response time during recovery. For all architectures, interleaved declustering enables data to be spread over twice as many disks for improved load balancing. We show how tuning for interleaved declustering is simplified because it is dependent only on a few parameters that are usually well known for a specific workload and system configuration.
Parity logging disk arrays Parity-encoded redundant disk arrays provide highly reliable, cost-effective secondary storage with high performance for reads and large writes. Their performance on small writes, however, is much worse than mirrored disks—the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small-write problem for redundant disk arrays. Parity logging applies journalling techniques to reduce substantially the cost of small writes. We provide detailed models of parity logging and competing schemes—mirroring, floating storage, and RAID level 5—and verify these models by simulation. Parity logging provides performance competitive with mirroring, but with capacity overhead close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching more effectively than all three alternative approaches.
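To make the small-write arithmetic concrete: a RAID-5 small write must turn old data and old parity into new parity via XOR; parity logging defers the parity-disk write by appending the XOR difference to a sequential log and folding the log in later. In this hedged sketch, byte strings stand in for disk blocks.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_data, new_data = b"\x0f" * 8, b"\xf0" * 8
old_parity = b"\x55" * 8

# In-place RAID-5 rule: new_parity = old_parity XOR old_data XOR new_data.
update_image = xor(old_data, new_data)      # what parity logging appends
parity_log = [update_image]                 # cheap, sequential log writes

# Later, one batched pass folds the accumulated images into the parity block.
new_parity = old_parity
for image in parity_log:
    new_parity = xor(new_parity, image)

assert new_parity == xor(old_parity, xor(old_data, new_data))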
Analytic Modeling and Comparisons of Striping Strategies for Replicated Disk Arrays Data replication has been widely used as a means of increasing the data availability for critical applications in the event of disk failure. There are different ways of organizing the two copies of the data across a disk array. This paper compares strategies for striping data of the two copies in the context of database applications. By keeping both copies active, we explore strategies that can take advantage of the additional copy to improve not only availability, but also performance during both normal and failure modes. We consider the effects of small and large stripe sizes on the performance of disk arrays with two active copies of data under a mixed workload of queries and transactions with a skewed access pattern. We propose a dual (hybrid) striping strategy which uses different stripe sizes for the two copies and a disk queuing policy designed to exploit this organization for optimal performance. An analytical model is devised for this scheme, by treating the individual disks as independent, and applying an M/G/1 queuing model. Disks on which a large query scan is running are modeled by a variation of the queue with permanent customers, which leads to an iterative functional equation for the query scan delay distribution. A solution for this equation is given. The results are validated against simulations and are shown to match well. Comparison with uniform striping strategies show that the dual striping scheme yields the most stable performance in a variety of workloads, out-performing the uniform striping strategy using either mirrored or chained declustering under both normal and failure mode operations.
Scheduling algorithms for modern disk drives Disk subsystem performance can be dramatically improved by dynamically ordering, or scheduling, pending requests. Via strongly validated simulation, we examine the impact of complex logical-to-physical mappings and large prefetching caches on scheduling effectiveness. Using both synthetic workloads and traces captured from six different user environments, we arrive at three main conclusions: (1) Incorporating complex mapping information into the scheduler provides only a marginal (less than 2%) decrease in response times for seek-reducing algorithms. (2) Algorithms which effectively utilize prefetching disk caches provide significant performance improvements for workloads with read sequentiality. The cyclical scan algorithm (C-LOOK), which always schedules requests in ascending logical order, achieves the highest performance among seek-reducing algorithms for such workloads. (3) Algorithms that reduce overall positioning delays produce the highest performance provided that they recognize and exploit a prefetching cache.
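A minimal sketch of the C-LOOK policy the study singles out: serve requests in ascending logical order from the head position, then wrap to the lowest outstanding request and sweep upward again. Block numbers below are arbitrary.

def c_look(head: int, pending: list[int]) -> list[int]:
    ahead = sorted(b for b in pending if b >= head)
    behind = sorted(b for b in pending if b < head)
    return ahead + behind          # one ascending sweep, then jump back

print(c_look(50, [95, 10, 60, 30, 80, 5]))   # [60, 80, 95, 5, 10, 30]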
Fault Tolerance Issues in Data Declustering for Parallel Database Systems Maintaining the integrity of data and its accessibility are crucial tasks in database systems. Although each component in the storage hierarchy can be fairly reliable, a large collection of such components is prone to failure; this is especially true of the secondary storage system, which normally contains a large number of magnetic disks. In designing a fault-tolerant secondary storage system, one should keep in mind that failures, although potentially devastating, are expected to occur fairly...
Adaptive database buffer allocation using query feedback In this paper, we propose the concept of using query execution feedback for improving database buffer management. A query feedback model is defined which adaptively quantifies the page fault characteristics of all query access patterns, including sequential, looping and, most importantly, random. Based on this model, a load control and a marginal gain ratio buffer allocation scheme are developed. Simulation experiments show that the proposed method is consistently better than the previous methods and, in most cases, it significantly outperforms all other methods for random access reference patterns.
B-tree indexes for high update rates In some applications, data capture dominates query processing. For example, monitoring moving objects often requires more insertions and updates than queries. Data gathering using automated sensors often exhibits this imbalance. More generally, indexing streams is considered an unsolved problem.For those applications, B-tree indexes are good choices if some trade-off decisions are tilted towards optimization of updates rather than towards optimization of queries. This paper surveys some techniques that let B-trees sustain very high update rates, up to multiple orders of magnitude higher than traditional B-trees, at the expense of query processing performance. Not surprisingly, some of these techniques are reminiscent of those employed during index creation, index rebuild, etc., while other techniques are derived from well known technologies such as differential files and log-structured file systems.
Analyzing Drum Patterns Using Conditional Deep Belief Networks.
An efficient scheme for providing high availability Replication at the partition level is a promising approach for increasing availability in a Shared Nothing architecture. We propose an algorithm for maintaining replicas with little overhead during normal failure-free processing. Our mechanism updates the secondary replica in an asynchronous manner: entire dirty pages are sent to the secondary at some time before they are discarded from the primary's buffer. A log server node (hardened against failures) maintains the log for each node. If a primary node fails, the secondary fetches the log from the log server, applies it to its replica, and brings itself to the primary's last transaction-consistent state. We study the performance of various policies for sending pages to the secondary and the corresponding trade-offs between recovery time and overhead during failure-free processing.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1.077909, 0.010002, 0.008889, 0.004706, 0.003015, 0.000867, 0.000321, 0.000138, 0.000041, 0.000005, 0, 0, 0, 0
Metric forests based on Gaussian mixture model for visual image classification. Visual image classification plays an important role in computer vision and pattern recognition. In this paper, a new random forests method called metric forests is suggested. This method takes the distribution of the datasets (the original dataset and the bootstrapped ones) into full consideration, exploiting the distribution similarity between the original dataset and the bootstrapped datasets. For each bootstrapped dataset, a metric decision tree is built based on a Gaussian mixture model. A metric decision tree learned from a bootstrapped dataset with a low or high similarity index is given a small weight when voting, and vice versa. The motivation is that a dataset with low similarity may not represent the original dataset well, while one with high similarity has a big chance of overfitting. To evaluate the proposed metric forests method, extensive experiments were conducted on visual image classification, including texture, flower, and food image classification. The experimental results validated the superiority of the proposed metric forests on the ALOT, Flower-102 and Food-101 datasets.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks affects their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.
Lower Bounds on Complexity of Shallow Perceptron Networks. Model complexity of shallow (one-hidden-layer) perceptron networks computing multivariable functions on finite domains is investigated. Lower bounds are derived on the growth of the number of network units or sizes of output weights in terms of variations of the functions to be computed. A concrete construction is presented of a class of functions which cannot be computed by perceptron networks with considerably smaller numbers of units and output weights than the sizes of the functions' domains. In particular, functions on Boolean d-dimensional cubes are constructed which cannot be computed by shallow perceptron networks with numbers of hidden units and sizes of output weights depending on d polynomially. A subclass of these functions is described whose elements can be computed by two-hidden-layer networks with the number of units depending on d linearly.
Model complexities of shallow networks representing highly varying functions Model complexities of shallow (i.e., one-hidden-layer) networks representing highly varying multivariable {-1, 1}-valued functions are studied in terms of variational norms tailored to dictionaries of network units. It is shown that bounds on these norms define classes of functions computable by networks with constrained numbers of hidden units and sizes of output weights. Estimates of probabilistic distributions of values of variational norms with respect to typical computational units, such as perceptrons and Gaussian kernel units, are derived via geometric characterization of variational norms combined with the probabilistic Chernoff Bound. It is shown that almost any randomly chosen {-1, 1}-valued function on a sufficiently large d-dimensional domain has variation with respect to perceptrons depending on d exponentially.
Complexity of Shallow Networks Representing Functions with Large Variations.
Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene. Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for succe...
Sparseness Analysis in the Pretraining of Deep Neural Networks A major progress in deep multilayer neural networks (DNNs) is the invention of various unsupervised pretraining methods to initialize network parameters which lead to good prediction accuracy. This paper presents the sparseness analysis on the hidden unit in the pretraining process. In particular, we use the L₁-norm to measure sparseness and provide some sufficient conditions for that pretraining leads to sparseness with respect to the popular pretraining models--such as denoising autoencoders (DAEs) and restricted Boltzmann machines (RBMs). Our experimental results demonstrate that when the sufficient conditions are satisfied, the pretraining models lead to sparseness. Our experiments also reveal that when using the sigmoid activation functions, pretraining plays an important sparseness role in DNNs with sigmoid (Dsigm), and when using the rectifier linear unit (ReLU) activation functions, pretraining becomes less effective for DNNs with ReLU (Drelu). Luckily, Drelu can reach a higher recognition accuracy than DNNs with pretraining (DAEs and RBMs), as it can capture the main benefit (such as sparseness-encouraging) of pretraining in Dsigm. However, ReLU is not adapted to the different firing rates in biological neurons, because the firing rate actually changes along with the varying membrane resistances. To address this problem, we further propose a family of rectifier piecewise linear units (RePLUs) to fit the different firing rates. The experimental results show that the performance of RePLU is better than ReLU, and is comparable with those with some pretraining techniques, such as RBMs and DAEs.
Deep and Shallow Architecture of Multilayer Neural Networks This paper focuses on the deep and shallow architecture of multilayer neural networks (MNNs). The demonstration of whether or not an MNN can be replaced by another MNN with fewer layers is equivalent to studying the topological conjugacy of its hidden layers. This paper provides a systematic methodology to indicate when two hidden spaces are topologically conjugated. Furthermore, some criteria are presented for some specific cases.
Understanding the difficulty of training deep feedforward neural networks Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, explaining the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence. Deep learning methods aim at learning feature hierarchies, with features from higher levels of the hierarchy formed by the composition of lower level features. Much attention has recently been devoted to them (see (Bengio, 2009) for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009) suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Most of the recent experimental results with deep architectures are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pretraining (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a "better" basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results.
So here instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact).
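The normalized initialization this paper proposes (now widely known as Xavier or Glorot initialization) is easy to state in code: draw weights uniformly with a limit scaled by fan-in plus fan-out, so that activation and gradient variances stay roughly constant across layers. The layer sizes below are arbitrary examples.

import numpy as np

def glorot_uniform(fan_in, fan_out, seed=0):
    rng = np.random.default_rng(seed)
    # U[-L, L] with L = sqrt(6 / (fan_in + fan_out)) gives variance 2 / (fan_in + fan_out)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(784, 256)
print(W.std(), np.sqrt(2.0 / (784 + 256)))   # empirical std matches the target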
Learning long-term dependencies with gradient descent is difficult Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on to information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
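The difficulty can be made concrete with a small numeric sketch: when the largest singular value of the recurrent Jacobian is below 1, the backpropagated gradient shrinks geometrically with the time lag (illustrative only; the nonlinearity's contractive effect is ignored here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 50, 100
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]  # largest singular value = 0.9

g = rng.standard_normal(n)      # gradient at the final time step
norms = []
for _ in range(T):              # backpropagate through time
    g = W.T @ g                 # linearized recurrent step
    norms.append(np.linalg.norm(g))
print(norms[0], norms[-1])      # shrinks roughly like 0.9**T
```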
Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription. We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription.
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
Probabilistic Situation Calculus In this article we propose a Probabilistic Situation Calculus logical language to represent and reason with knowledge about dynamic worlds in which actions have uncertain effects. Uncertain effects are modeled by dividing an action into two subparts: a deterministic (agent produced) input and a probabilistic reaction (produced by nature). We assume that the probabilities of the reactions have known distributions. Our logical language is an extension to the Situation Calculus in the style proposed by Raymond Reiter. There are three aspects to this work. First, we extend the language in order to accommodate the necessary distinctions (e.g., the separation of actions into inputs and reactions). Second, we develop the notion of Randomly Reactive Automata in order to specify the semantics of our Probabilistic Situation Calculus. Finally, we develop a reasoning system in MATHEMATICA capable of performing temporal projection in the Probabilistic Situation Calculus.
An action-based approach to the formal specification and automatic analysis of business processes under authorization constraints Business processes under authorization control are sets of coordinated activities subject to a security policy stating which agent can access which resource. Their behavior is difficult to predict due to the complex and unexpected interleaving of different execution flows within the process. Serious flaws may thus go undetected and manifest themselves only after deployment. For this reason, business processes are being considered a new, promising application domain for formal methods and model checking techniques in particular. In this paper we show that action-based languages provide a rich and natural framework for the formal specification of and automated reasoning about business processes under authorization constraints. We do this by discussing the application of the action language C to the specification of a business process from the banking domain that is representative of an important class of business processes of practical relevance. Furthermore we show that a number of reasoning tasks that arise in this context (namely checking whether the control flow together with the security policy meets the expected security properties, building a security policy for the given business process under given security requirements, and finding an allocation of tasks to agents that guarantees the completion of the business process) can be carried out automatically using the Causal Calculator CCalc. We also compare C with the prominent specification language used in model-checking.
Optimizing large data transfers in parity-declustered data layouts. Disk arrays allow faster access to users' data by distributing the data among a collection of disks and allowing parallel access. Fault tolerance in a disk array can be achieved by using a data layout, and the technique of parity declustering allows faster failure recovery at the cost of additional space dedicated to redundant information. A collection of six performance conditions that parity-declustered data layouts should satisfy has guided most previous work; however, two of these conditions (Maximal parallelism and Large write optimization) cannot be jointly satisfied in most cases. This limits the ability of parity-declustered data layouts to take full advantage of the available parallelism during large data transfers. We present data layouts that approximately satisfy these two conditions simultaneously for all possible array configurations, and bound the deviations from complete satisfaction. Our results yield improved performance guarantees for large data transfers in parity-declustered data layouts.
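Parity-declustered layouts, like RAID parity schemes generally, recover a lost block as the XOR of the surviving blocks in its parity stripe. A toy sketch of that invariant (not the paper's layout construction):

```python
from functools import reduce

def parity(blocks):
    # The parity block is the bytewise XOR of all blocks in a parity stripe.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x0f\x00", b"\xa0\x55"]
p = parity(data)
# Reconstruct a lost block by XOR-ing the survivors with the parity block.
lost = parity([data[1], data[2], p])
assert lost == data[0]
```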
1.011338
0.01325
0.012887
0.01244
0.01
0.005
0.001667
0.000138
0.000015
0.000002
0
0
0
0
Deep Generative Stochastic Networks Trainable by Backprop. We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders and obtain along the way an interesting justification for dependency networks and generalized pseudolikelihood, along with a definition of an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. We validate these theoretical results with experiments on two image datasets using an architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows training to proceed with simple backprop, without the need for layerwise pretraining.
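The transition operator of such a chain can be sketched as alternating corruption and learned denoising; `denoise` below is a placeholder for a trained model, and the noise level is hypothetical:

```python
import numpy as np

def sample_chain(denoise, x0, steps=100, noise=0.1, rng=np.random.default_rng(4)):
    # GSN-style sampling: corrupt the state, then apply the learned
    # reconstruction; the chain's stationary distribution estimates the data
    # distribution.
    x, samples = x0, []
    for _ in range(steps):
        x_tilde = x + noise * rng.standard_normal(x.shape)  # corruption C(x~|x)
        x = denoise(x_tilde)                                # learned P(x|x~)
        samples.append(x)
    return samples

# Placeholder denoiser standing in for a trained network.
chain = sample_chain(lambda x: 0.9 * x, np.ones(3))
```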
Multimodal Transitions for Generative Stochastic Networks. Generative Stochastic Networks (GSNs) have been recently introduced as an alternative to traditional probabilistic modeling: instead of parametrizing the data distribution directly, one parametrizes a transition operator for a Markov chain whose stationary distribution is an estimator of the data generating distribution. The result of training is therefore a machine that generates samples through this Markov chain. However, the previously introduced GSN consistency theorems suggest that in order to capture a wide class of distributions, the transition operator in general should be multimodal, something that has not been done before this paper. We introduce for the first time multimodal transition distributions for GSNs, in particular using models in the NADE family (Neural Autoregressive Density Estimator) as output distributions of the transition operator. A NADE model is related to an RBM (and can thus model multimodal distributions) but its likelihood (and likelihood gradient) can be computed easily. The parameters of the NADE are obtained as a learned function of the previous state of the learned Markov chain. Experiments clearly illustrate the advantage of such multimodal transition distributions over unimodal GSNs.
Transforming Exploratory Creativity with DeLeNoX We introduce DeLeNoX (Deep Learning Novelty Explorer), a system that autonomously creates artifacts in constrained spaces according to its own evolving interestingness criterion. DeLeNoX proceeds in alternating phases of exploration and transformation. In the exploration phases, a version of novelty search augmented with constraint handling searches for maximally diverse artifacts using a given distance function. In the transformation phases, a deep learning autoencoder learns to compress the variation between the found artifacts into a lower-dimensional space. The newly trained encoder is then used as the basis for a new distance function, transforming the criteria for the next exploration phase. In the current paper, we apply DeLeNoX to the creation of spaceships suitable for use in two-dimensional arcade-style computer games, a representative problem in procedural content generation in games. We also situate DeLeNoX in relation to the distinction between exploratory and transformational creativity, and in relation to Schmidhuber's theory of creativity through the drive for compression progress.
Memory Bounded Deep Convolutional Networks. In this work, we investigate the use of sparsity-inducing regularizers during training of Convolutional Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent, implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.
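A generic way to realize such a sparsity-inducing regularizer inside plain stochastic gradient descent, not necessarily the paper's exact penalty, is to add an L1 subgradient term to each update:

```python
import numpy as np

def sgd_step_l1(w, grad, lr=0.01, lam=1e-4):
    # One SGD step on loss + lam * ||w||_1: the L1 subgradient lam*sign(w)
    # pushes small weights toward zero, sparsifying the connectivity.
    return w - lr * (grad + lam * np.sign(w))

w = np.random.default_rng(0).standard_normal(10)
w = sgd_step_l1(w, grad=np.zeros(10))  # placeholder gradient
```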
The student-t mixture as a natural image patch prior with application to image compression Recent results have shown that Gaussian mixture models (GMMs) are remarkably good at density modeling of natural image patches, especially given their simplicity. In terms of log likelihood on real-valued data they are comparable with the best performing techniques published, easily outperforming more advanced ones, such as deep belief networks. They can be applied to various image processing tasks, such as image denoising, deblurring and inpainting, where they improve on other generic prior methods, such as sparse coding and field of experts. Based on this we propose the use of another, even richer mixture model based image prior: the Student-t mixture model (STM). We demonstrate that it convincingly surpasses GMMs in terms of log likelihood, achieving performance competitive with the state of the art in image patch modeling. We apply both the GMM and STM to the task of lossy and lossless image compression, and propose efficient coding schemes that can easily be extended to other unsupervised machine learning models. Finally, we show that the suggested techniques outperform JPEG, with results comparable to or better than JPEG 2000.
Convex Two-Layer Modeling. Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of implicit features useful for prediction. Unfortunately, such models are difficult to train because inference over latent variables must be performed concurrently with parameter optimization, creating a highly non-convex problem. Instead of proposing another local training method, we develop a convex relaxation of hidden-layer conditional models that admits global training. Our approach extends current convex modeling approaches to handle two nested nonlinearities separated by a non-trivial adaptive latent layer. The resulting methods are able to acquire two-layer models that cannot be represented by any single-layer model over the same features, while improving training quality over local heuristics.
Two-layer contractive encodings for learning stable nonlinear features. Unsupervised learning of feature hierarchies is often a good strategy to initialize deep architectures for supervised learning. Most existing deep learning methods build these feature hierarchies layer by layer in a greedy fashion using either auto-encoders or restricted Boltzmann machines. Both yield encoders which compute linear projections of input followed by a smooth thresholding function. In this work, we demonstrate that these encoders fail to find stable features when the required computation is in the exclusive-or class. To overcome this limitation, we propose a two-layer encoder which is less restricted in the type of features it can learn. The proposed encoder is regularized by an extension of previous work on contractive regularization. This proposed two-layer contractive encoder potentially poses a more difficult optimization problem, and we further propose to linearly transform hidden neurons of the encoder to make learning easier. We demonstrate the advantages of the two-layer encoders qualitatively on artificially constructed datasets as well as commonly used benchmark datasets. We also conduct experiments on a semi-supervised learning task and show the benefits of the proposed two-layer encoders trained with the linear transformation of perceptrons.
Convolutional-Recursive Deep Learning for 3D Object Classification.
Shallow vs. Deep Sum-Product Networks. We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning.
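A sum-product network is straightforward to evaluate bottom-up; a minimal sketch with one product unit and one weighted-sum unit over three inputs:

```python
# Leaves hold input values; internal units compute products or weighted sums.
def evaluate(node, x):
    kind = node["type"]
    if kind == "leaf":
        return x[node["var"]]
    vals = [evaluate(c, x) for c in node["children"]]
    if kind == "product":
        out = 1.0
        for v in vals:
            out *= v
        return out
    return sum(w * v for w, v in zip(node["weights"], vals))  # sum unit

# x0*x1 + 0.5*x2 : a tiny sum-product network over three inputs.
net = {"type": "sum", "weights": [1.0, 0.5],
       "children": [{"type": "product",
                     "children": [{"type": "leaf", "var": 0},
                                  {"type": "leaf", "var": 1}]},
                    {"type": "leaf", "var": 2}]}
print(evaluate(net, [2.0, 3.0, 4.0]))  # 2*3 + 0.5*4 = 8.0
```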
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
A Cost-effective Near-line Storage Server for Multimedia System We consider a storage server architecture for multimedia information systems. While most other works on multimedia storage servers assume on-line disk storage, we consider a two-tier storage architecture with a robotic tape library as the vast near-line storage and on-line disks as the front-line storage. Magnetic tapes are cheaper, more robust, and have a larger capacity; hence they are more cost effective for large scale storage systems (e.g., video on demand (VOD) systems may store tens of thousands of videos). We study in detail the design issues of the tape subsystem and propose some novel tape scheduling algorithms which give faster response and require less disk buffering.
Publishing: a reliable broadcast communication mechanism Publishing is a model and mechanism for crash recovery in a distributed computing environment. Published communication works for systems connected via a broadcast medium by recording messages transmitted over the network. The recovery mechanism can be completely transparent to the failed process and all processes interacting with it. Although published communication is intended for a broadcast network such as a bus, a ring, or an Ethernet, it can be used in other environments. A recorder reliably stores all messages that are transmitted, as well as checkpoint and recovery information. When it detects a failure, the recorder may restart affected processes from checkpoints. The recorder subsequently resends to each process all messages which were sent to it since the time its checkpoint was taken, while ignoring duplicate messages sent by it. Message-based systems without shared memory can use published communications to recover groups of processes. Simulations show that at least 5 multi-user minicomputers can be supported on a standard Ethernet using a single recorder. The prototype version implemented in DEMOS/MP demonstrates that an error recovery can be transparent to user processes and can be centralized in the network.
LH*g: a high-availability scalable distributed data structure by record grouping LH*g (Linear Hashing by grouping) is a high-availability extension of the LH* scalable distributed data structure. An LH*g file scales up with constant key search and insert performance, while surviving any single-site unavailability (failure). We achieve high availability through a new principle of record grouping. A group is a logical structure of up to k records, where k is a file parameter. Every group contains a parity record allowing for the reconstruction of an unavailable member. The basic scheme may be generalized to support the unavailability of any number of sites, at the expense of storage and messaging. Other known high-availability schemes are static, or require more storage, or provide worse search performance
GPU Accelerated SVM with Sparse Sliced EllR-T Matrix Format This paper presents the SECu-SVM algorithm for solving classification problems. It allows for a significant acceleration of the standard SVM implementations by transferring the most time-consuming computations from the standard CPU to the Graphics Processor Units (GPU). In addition, the highly efficient Sliced EllR-T sparse matrix format was used for storing the dataset in GPU memory, which requires a very low memory footprint and is also well adapted to parallel processing. Performed experiments demonstrate an acceleration of 4-100 times over LibSVM. Moreover, in the majority of cases the SECu-SVM is less time-consuming than the best sparse GPU implementations and allows for handling significantly larger classification datasets.
1.01498
0.014541
0.01375
0.01339
0.0125
0.006285
0.0025
0.001053
0.00011
0.000009
0
0
0
0
The Boolean Hierarchy over Level 1/2 of the Straubing-Therien Hierarchy For some fixed alphabet A with |A| ≥ 2, a language L ⊆ A∗ is in the class L1/2 of the Straubing-Therien hierarchy if and only if it can be expressed as a finite union of languages A∗a1A∗a2A∗ ⋯ A∗anA∗, where ai ∈ A and n ≥ 0. The class L1 is defined as the boolean closure of L1/2. It is known that the classes L1/2 and L1 are decidable. We give a membership criterion for the single classes of the boolean hierarchy over L1/2. From this criterion we can conclude that this boolean hierarchy is proper and that its classes are decidable. In finite model theory the latter implies the decidability of the classes of the boolean hierarchy over the class Σ1 of the FO(<)-logic. Moreover we prove a "forbidden-pattern" characterization of L1 of the type: L ∈ L1 if and only if a certain pattern does not appear in the transition graph of a deterministic finite automaton accepting L. We discuss complexity theoretical consequences of our results. Classification: finite automata, concatenation hierarchies, boolean hierarchy, decidability
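Membership in a single language A∗a1A∗a2A∗ ⋯ A∗anA∗ amounts to checking that a1...an occurs in the word as a scattered subsequence, so deciding a level-1/2 language reduces to a finite union of such checks; a minimal sketch:

```python
def contains_subword(w, pattern):
    # w is in A* a1 A* a2 A* ... A* an A*  iff  a1...an occurs in w as a
    # scattered subsequence; `in` on the iterator consumes past each match.
    it = iter(w)
    return all(a in it for a in pattern)

print(contains_subword("xaybzc", "abc"))  # True
print(contains_subword("cba", "abc"))     # False
```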
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
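The definition can be checked by brute force on small propositional programs via the Gelfond-Lifschitz reduct; a sketch where each rule is a (head, positive body, negative body) triple:

```python
def reduct(program, M):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets M,
    # then forget the remaining negative literals.
    return [(h, pos) for h, pos, neg in program if not set(neg) & M]

def least_model(positive_program):
    # Least model of a negation-free program by fixpoint iteration.
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

def is_stable(program, M):
    return least_model(reduct(program, set(M))) == set(M)

# p :- not q.   q :- not p.   This program has two stable models: {p} and {q}.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_stable(prog, {"p"}), is_stable(prog, {"q"}), is_stable(prog, {"p", "q"}))
# True True False
```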
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
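A from-scratch sketch of the method with an RBF kernel (the kernel choice and gamma are placeholders): build the kernel matrix, center it in feature space, and eigendecompose.

```python
import numpy as np

def kernel_pca(X, k, gamma=1.0):
    # RBF kernel matrix over the training points.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    # Double-center K so features have zero mean in the kernel-induced space.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:k]            # top-k eigenpairs
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training points onto the top-k nonlinear components.
    return vecs * np.sqrt(np.maximum(vals, 0))

Z = kernel_pca(np.random.default_rng(2).standard_normal((100, 5)), k=2)
```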
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
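The square-root idea in miniature: stack the prior and odometry constraints into a measurement Jacobian and factor it by QR, solving by back-substitution instead of maintaining an EKF covariance (a toy 1-D pose chain, not the paper's full SLAM setup):

```python
import numpy as np

# Poses x0..x3 with a prior x0 ~ 0 and odometry x_{t+1} - x_t ~ 1.
A = np.array([[1., 0., 0., 0.],    # prior on x0
              [-1., 1., 0., 0.],   # odometry constraints
              [0., -1., 1., 0.],
              [0., 0., -1., 1.]])
b = np.array([0., 1., 1., 1.])
Q, R = np.linalg.qr(A)             # R is the square root information factor
x = np.linalg.solve(R, Q.T @ b)    # back-substitution on the triangular factor
print(x)                           # ~ [0, 1, 2, 3]
```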
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A-Wristocracy: Deep learning on wrist-worn sensing for recognition of user complex activities In this work we present A-Wristocracy, a novel framework for recognizing very fine-grained and complex in-home activities of human users (particularly elderly people) with wrist-worn device sensing. Our designed A-Wristocracy system improves upon the state-of-the-art works on in-home activity recognition using wearables. These works are mostly able to detect coarse-grained ADLs (Activities of Daily Living) but not a large number of fine-grained and complex IADLs (Instrumental Activities of Daily Living). They are also not able to distinguish similar activities with different context (such as sit on floor vs. sit on bed vs. sit on sofa). Our solution enables accurate detection of in-home ADLs/IADLs and contextual activities, which are all critically important for remote elderly care in tracking physical and cognitive capabilities. A-Wristocracy makes it feasible to classify a large number of fine-grained and complex activities, through Deep Learning based data analytics and by exploiting multi-modal sensing on a wrist-worn device. It exploits minimal functionality from very light additional infrastructure (only a few Bluetooth beacons) for coarse-level location context. A-Wristocracy preserves direct user privacy by excluding camera/video imaging on the wearable or infrastructure. The classification procedure consists of practical feature set extraction from multi-modal wearable sensor suites, followed by a Deep Learning based supervised fine-level classification algorithm. We have collected exhaustive home-based ADL and IADL data from multiple users. Our designed classifier is validated to be able to recognize 22 very fine-grained and complex daily activities (a much larger number than the 6-12 activities detected by state-of-the-art works using wearables and no camera/video) with high average test accuracies of 90% or more for two users in two different home environments.
On-line deep learning method for action recognition. In this paper an unsupervised on-line deep learning algorithm for action recognition in video sequences is proposed. Deep learning models capable of deriving spatio-temporal data have been proposed in the past with remarkable results; yet, they are mostly restricted to building features from a short window length. The model presented here, on the other hand, considers the entire sample sequence and extracts the description in a frame-by-frame manner. Each computational node of the proposed paradigm forms clusters and computes point representatives. Subsequently, a first-order transition matrix stores and continuously updates the successive transitions among the clusters. Both the spatial and temporal information are concurrently treated by the Viterbi Algorithm, which maximizes a criterion based upon (a) the temporal transitions and (b) the similarity of the respective input sequence with the cluster representatives. The derived Viterbi path is the node’s output, whereas the concatenation of nine vicinal such paths constitutes the input to the corresponding upper level node. The engagement of ART and the Viterbi Algorithm in a Deep learning architecture, here, for the first time, leads to a substantially different approach for action recognition. Compared with other deep learning methodologies, in most cases, it is shown to outperform them in terms of classification accuracy.
A Framework For Selecting Deep Learning Hyper-Parameters Recent research has found that deep learning architectures show significant improvements over traditional shallow algorithms when mining high dimensional datasets. When the choice of algorithm employed, hyper-parameter setting, number of hidden layers and nodes within a layer are combined, the identification of an optimal configuration can be a lengthy process. Our work provides a framework for building deep learning architectures via a stepwise approach, together with an evaluation methodology to quickly identify poorly performing architectural configurations. Using a dataset with high dimensionality, we illustrate how different architectures perform and how one algorithm configuration can provide input for fine-tuning more complex models.
A Novel Feature Extraction Method for Scene Recognition Based on Centered Convolutional Restricted Boltzmann Machines. Scene recognition is an important research topic in computer vision, while feature extraction is a key step of scene recognition. Although classical Restricted Boltzmann Machines (RBM) can efficiently represent complicated data, it is hard to handle large images due to its complexity in computation. In this paper, a novel feature extraction method, named Centered Convolutional Restricted Boltzmann Machines (CCRBM), is proposed for scene recognition. The proposed model improves the Convolutional Restricted Boltzmann Machines (CRBM) by introducing centered factors in its learning strategy to reduce the source of instabilities. First, the visible units of the network are redefined using centered factors. Then, the hidden units are learned with a modified energy function by utilizing a distribution function, and the visible units are reconstructed using the learned hidden units. In order to achieve better generative ability, the Centered Convolutional Deep Belief Networks (CCDBN) is trained in a greedy layer-wise way. Finally, a softmax regression is incorporated for scene recognition. Extensive experimental evaluations on the datasets of natural scenes, MIT-indoor scenes, MIT-Places 205, SUN 397, Caltech 101, CIFAR-10, and NORB show that the proposed approach performs better than its counterparts in terms of stability, generalization, and discrimination. The CCDBN model is more suitable for natural scene image recognition by virtue of convolutional property.
High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. High-dimensional problem domains pose significant challenges for anomaly detection. The presence of irrelevant features can conceal the presence of anomalies. This problem, known as the ‘curse of dimensionality’, is an obstacle for many anomaly detection techniques. Building a robust anomaly detection model for use in high-dimensional spaces requires the combination of an unsupervised feature extractor and an anomaly detector. While one-class support vector machines are effective at producing decision surfaces from well-behaved feature vectors, they can be inefficient at modelling the variation in large, high-dimensional datasets. Architectures such as deep belief networks (DBNs) are a promising technique for learning robust features. We present a hybrid model where an unsupervised DBN is trained to extract generic underlying features, and a one-class SVM is trained from the features learned by the DBN. Since a linear kernel can be substituted for nonlinear ones in our hybrid model without loss of accuracy, our model is scalable and computationally efficient. The experimental results show that our proposed model yields comparable anomaly detection performance with a deep autoencoder, while reducing its training and testing time by a factor of 3 and 1000, respectively.
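A hedged sketch of the hybrid idea using scikit-learn, with a single RBM standing in for the DBN feature extractor and all data and hyper-parameters hypothetical:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.svm import OneClassSVM

# Placeholder "normal" training data in [0, 1), as BernoulliRBM expects.
X = np.random.default_rng(3).random((500, 64))

# Unsupervised feature extractor feeding a linear one-class SVM; the paper's
# model stacks RBMs into a DBN, a single RBM is used here for brevity.
model = make_pipeline(
    BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0),
    OneClassSVM(kernel="linear", nu=0.1),
)
model.fit(X)
scores = model.decision_function(X)  # negative scores flag anomalies
```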
A restricted Boltzmann machine based two-lead electrocardiography classification A restricted Boltzmann machine learning algorithm is proposed for the two-lead heart beat classification problem. ECG classification is a complex pattern recognition problem. The unsupervised learning algorithm of the restricted Boltzmann machine is ideal for mining the massive unlabelled ECG wave beats collected in heart healthcare monitoring applications. A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. In this paper a deep belief network was constructed and the RBM based algorithm was used in the classification problem. Using the twelve classes recommended by the ANSI/AAMI EC57: 1998/(R)2008 standard as the waveform labels, the algorithm was evaluated on the two-lead ECG dataset of MIT-BIH and achieves an accuracy of 98.829%. The proposed algorithm performed well in the two-lead ECG classification problem, and could be generalized to multi-lead unsupervised ECG classification or detection problems.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Parameterized Complexity: The Main Ideas and Some Research Frontiers The purposes of this paper are two: (1) To give an exposition of the main ideas of parameterized complexity, and (2) To discuss some of the current research frontiers and directions.
The computational complexity of propositional STRIPS planning I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For these types of restrictions, I show when planning is tractable (polynomial) and intractable (NP-hard). In general, it is...
Improving RAID Performance Using a Multibuffer Technique RAID (redundant array of inexpensive disks) offers high performance for read accesses and large writes to many consecutive blocks. On small writes, however, it entails large penalties. Two approaches have been proposed to address this problem: 1. The first approach records the update information on a separate log disk, and only brings the affected parity blocks to the consistent state when the system is idle. This strategy increases the chance of disk failure due to the additional log disks. Furthermore, heavy system loads for an extended period of time can overflow the log disks and cause sudden disastrous performance. 2. The second approach avoids the above problems by grouping the updated blocks into new stripes and writing them as large writes. Unfortunately, this strategy improves write performance at the expense of read operations. After many updates, a set of logically consecutive data blocks can migrate to only a few disks, making fetching them more expensive. In this paper, we improve on the second approach by eliminating its negative side effects. Our simulation results indicate that the existing scheme sometimes performs worse than the standard RAID5 design. Our method is consistently better than either of these techniques.
Dynamic partitioning of the cache hierarchy in shared data centers Due to the imperative need to reduce the management costs of large data centers, operators multiplex several concurrent database applications on a server farm connected to shared network attached storage. Determining and enforcing per-application resource quotas in the resulting cache hierarchy, on the fly, poses a complex resource allocation problem spanning the database server and the storage server tiers. This problem is further complicated by the need to provide strict Quality of Service (QoS) guarantees to hosted applications. In this paper, we design and implement a novel coordinated partitioning technique of the database buffer pool and storage cache between applications for any given cache replacement policy and per-application access pattern. We use statistical regression to dynamically determine the mapping between cache quota settings and the resulting per-application QoS. A resource controller embedded within the database engine actuates the partitioning of the two-level cache, converging towards the configuration with maximum application utility, expressed as the service provider revenue in that configuration, based on a set of latency sample points. Our experimental evaluation, using the MySQL database engine, a server farm with consolidated storage, and two e-commerce benchmarks, shows the effectiveness of our technique in enforcing application QoS, as well as maximizing the revenue of the service provider in shared server farms.
The MHETA Execution Model for Heterogeneous Clusters The availability of inexpensive "off the shelf" machines increases the likelihood that parallel programs run on heterogeneous clusters of machines. These programs are increasingly likely to be out of core, meaning that portions of their datasets must be stored on disk during program execution. This results in significant, per-iteration, I/O cost.This paper describes an execution model, called MHETA, which is the key component to finding an effective data distribution on heterogeneous clusters. MHETA takes into account computation, communication, and I/O costs of iterative scientific applications. MHETA uses automatically extracted information from a single iteration to predict the execution time of the remaining iterations. Results show that MHETA predicts with on average 98% accuracy the execution time of several scientific benchmarks (with and without prefetching) and one full-scale scientific program that utilize pipelined and other communication. MHETA is thus an effective tool when searching for the most effective distribution on a heterogeneous cluster.
"The sum of all human knowledge": A systematic review of scholarly research on the content of Wikipedia AbstractWikipedia may be the best-developed attempt thus far to gather all human knowledge in one place. Its accomplishments in this regard have made it a point of inquiry for researchers from different fields of knowledge. A decade of research has thrown light on many aspects of the Wikipedia community, its processes, and its content. However, due to the variety of fields inquiring about Wikipedia and the limited synthesis of the extensive research, there is little consensus on many aspects of Wikipedia's content as an encyclopedic collection of human knowledge. This study addresses the issue by systematically reviewing 110 peer-reviewed publications on Wikipedia content, summarizing the current findings, and highlighting the major research trends. Two major streams of research are identified: the quality of Wikipedia content including comprehensiveness, currency, readability, and reliability and the size of Wikipedia. Moreover, we present the key research trends in terms of the domains of inquiry, research design, data source, and data gathering methods. This review synthesizes scholarly understanding of Wikipedia content and paves the way for future studies.
1.2
0.2
0.2
0.1
0.066667
0.04
0.001156
0
0
0
0
0
0
0
Observability-Based Nested Belief Computation for Multiagent Systems and its Formalization Some agent architectures employ mental states such as belief, desire, goal, and intention. We also know that one often has a belief about someone else's belief (nested belief), and one's action is decided based on the nested belief. However, to the best of our knowledge, there is no concrete agent architecture that employs nested beliefs for decision making. The reason is simple: we do not have a good model of nested belief change. Hence, interesting technological questions are whether such a model can be devised or not, how it can be implemented, and how it can be used. In a previous paper, we proposed an algorithm for nested beliefs based on observability and logically characterized its output. Here, we propose another algorithm with improved expressiveness and efficiency.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
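The schemes above build on XOR parity. As a toy sketch (block sizes and the simulated failure are our choices, not the paper's), a lost block in a stripe can be rebuilt by XOR-ing the parity element with the surviving blocks:

```python
import os, functools

# One stripe of four data blocks plus one parity element.
blocks = [os.urandom(16) for _ in range(4)]
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
parity = functools.reduce(xor, blocks)                # parity = b0 ^ b1 ^ b2 ^ b3

lost = blocks[2]                                      # simulate a failed disk
survivors = blocks[:2] + blocks[3:]
recovered = functools.reduce(xor, survivors, parity)  # parity ^ survivors = lost
assert recovered == lost
```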
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Modeling Mutual Visibility Relationship in Pedestrian Detection Detecting pedestrians in cluttered scenes is a challenging problem in computer vision. The difficulty is added when several pedestrians overlap in images and occlude each other. We observe, however, that the occlusion/visibility statuses of overlapping pedestrians provide useful mutual relationship for visibility estimation - the visibility estimation of one pedestrian facilitates the visibility estimation of another. In this paper, we propose a mutual visibility deep model that jointly estimates the visibility statuses of overlapping pedestrians. The visibility relationship among pedestrians is learned from the deep model for recognizing co-existing pedestrians. Experimental results show that the mutual visibility deep model effectively improves the pedestrian detection results. Compared with existing image-based pedestrian detection approaches, our approach has the lowest average miss rate on the Caltech-Train dataset, the Caltech-Test dataset and the ETH dataset. Including mutual visibility leads to 4% - 8% improvements on multiple benchmark datasets.
Partial Occlusion Handling in Pedestrian Detection With a Deep Model. Part-based models have demonstrated their merit in object detection. However, there is a key issue to be solved on how to integrate the inaccurate scores of part detectors when there are occlusions, abnormal deformations, appearances, or illuminations. To handle the imperfection of part detectors, this paper presents a probabilistic pedestrian detection framework. In this framework, a deformable part-based model is used to obtain the scores of part detectors and the visibilities of parts are modeled as hidden variables. Once the occluded parts are identified, their effects are properly removed from the final detection score. Unlike previous occlusion handling approaches that assumed independence among the visibility probabilities of parts or manually defined rules for the visibility relationship, a deep model is proposed in this paper for learning the visibility relationship among overlapping parts at multiple layers. The proposed approach can be viewed as a general postprocessing of part-detection results and can take detection scores of existing part-based models as input. The experimental results on three public datasets (Caltech, ETH, and Daimler) and a new CUHK occlusion dataset (http://www.ee.cuhk.edu.hk/~xgwang/CUHK_pedestrian.html), which is specially designed for the evaluation of occlusion handling approaches, show the effectiveness of the proposed approach.
Joint Deep Learning for Pedestrian Detection Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.
Pedestrian Detection with Spatially Pooled Features and Structured Ensemble Learning. Many typical applications of object detection operate within a prescribed false-positive range. In this situation the performance of a detector should be assessed on the basis of the area under the ROC curve over that range, rather than over the full curve, as the performance outside the prescribed range is irrelevant. This measure is labelled as the partial area under the ROC curve (pAUC). We pro...
Hybrid Deep Learning for Face Verification This paper proposes a hybrid convolutional network (ConvNet)-Restricted Boltzmann Machine (RBM) model for face verification in wild conditions. A key contribution of this work is to directly learn relational visual features, which indicate identity similarities, from raw pixels of face pairs with a hybrid deep network. The deep ConvNets in our model mimic the primary visual cortex to jointly extract local relational visual features from two face images compared with the learned filter pairs. These relational features are further processed through multiple layers to extract high-level and global features. Multiple groups of ConvNets are constructed in order to achieve robustness and characterize face similarities from different aspects. The top-layer RBM performs inference from complementary high-level features extracted from different ConvNet groups with a two-level average pooling hierarchy. The entire hybrid deep network is jointly fine-tuned to optimize for the task of face verification. Our model achieves competitive face verification performance on the LFW dataset.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.
Deep Boltzmann Machines We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer "pre-training" phase that allows variational inference to be initialized with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.
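To make the layer structure concrete, the following sketch (toy sizes, random weights, and notation are our assumptions) runs one block Gibbs sweep in a two-hidden-layer Boltzmann machine; unlike a deep belief network, the middle layer is conditioned on both of its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

nv, nh1, nh2 = 6, 4, 3                      # visible and two hidden layers
W1 = 0.1 * rng.standard_normal((nv, nh1))   # visible <-> h1 weights
W2 = 0.1 * rng.standard_normal((nh1, nh2))  # h1 <-> h2 weights

v = rng.integers(0, 2, nv).astype(float)
h2 = rng.integers(0, 2, nh2).astype(float)

p_h1 = sigmoid(v @ W1 + h2 @ W2.T)          # h1 depends on BOTH v and h2
h1 = (rng.random(nh1) < p_h1).astype(float)
p_h2 = sigmoid(h1 @ W2)                     # h2 depends on h1
h2 = (rng.random(nh2) < p_h2).astype(float)
p_v = sigmoid(h1 @ W1.T)                    # v depends on h1
v = (rng.random(nv) < p_v).astype(float)
```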
A two-layer ICA-like model estimated by score matching Capturing regularities in high-dimensional data is an important problem in machine learning and signal processing. Here we present a statistical model that learns a nonlinear representation from the data that reflects abstract, invariant properties of the signal without making requirements about the kind of signal that can be processed. The model has a hierarchy of two layers, with the first layer broadly corresponding to Independent Component Analysis (ICA) and a second layer to represent higher order structure. We estimate the model using the mathematical framework of Score Matching (SM), a novel method for the estimation of non-normalized statistical models. The model incorporates a squaring nonlinearity, which we propose to be suitable for forming a higher-order code of invariances. Additionally the squaring can be viewed as modelling subspaces to capture residual dependencies, which linear models cannot capture.
Unsupervised Learning of Models for Recognition We present a method to learn object class models from unlabeled and unsegmented cluttered scenes for the purpose of visual object recognition. We focus on a particular type of model where objects are represented as flexible constellations of rigid parts (features). The variability within a class is represented by a joint probability density function (pdf) on the shape of the constellation and the output of part detectors. In a first stage, the method automatically identifies distinctive parts in the training set by applying a clustering algorithm to patterns selected by an interest operator. It then learns the statistical shape model using expectation maximization. The method achieves very good classification results on human faces and rear views of cars.
Semantic text classification of disease reporting Traditional text classification studied in the IR literature is mainly based on topics. That is, each class or category represents a particular topic, e.g., sports, politics or sciences. However, many real-world text classification problems require more refined classification based on some semantic aspects. For example, in a set of documents about a particular disease, some documents may report the outbreak of the disease, some may describe how to cure the disease, some may discuss how to prevent the disease, and yet some others may include all the above information. To classify text at this semantic level, the traditional "bag of words" model is no longer sufficient. In this paper, we report a text classification study at the semantic level and show that sentence semantic and structure features are very useful for such kind of classification. Our experimental results based on a disease outbreak dataset demonstrated the effectiveness of the proposed approach.
Disk arrays: high-performance, high-reliability storage subsystems As the performance of other system components continues to improve rapidly, storage subsystem performance becomes increasingly important. Storage subsystem performance and reliability can be enhanced by logically grouping multiple disk drives into disk arrays. Array data organizations are defined by their data distribution schemes and redundancy mechanisms. The various combinations of these two components make disk arrays suitable for a wide range of environments. Many array implementation decisions also result in trade-offs between performance and reliability. Disk arrays are thus an essential tool for satisfying storage performance and reliability requirements, while proper selection of a data organization can tailor an array to a particular environment.
Fundamentals of fault-tolerant distributed computing in asynchronous environments Fault tolerance in distributed computing is a wide area with a significant body of literature that is vastly diverse in methodology and terminology. This paper aims at structuring the area and thus guiding readers into this interesting field. We use a formal approach to define important terms like fault, fault tolerance, and redundancy. This leads to four distinct forms of fault tolerance and to two main phases in achieving them: detection and correction. We show that this can help to reveal inherently fundamental structures that contribute to understanding and unifying methods and terminology. By doing this, we survey many existing methodologies and discuss their relations. The underlying system model is the close-to-reality asynchronous message-passing model of distributed computing.
Extreme Learning Classifier with Deep Concepts.
Mobile Robot Control Using a Cloud of Particles. Common control systems for mobile robots include the use of deterministic control laws together with state estimation approaches and the consideration of the certainty equivalence principle. Recent approaches consider the use of partially observable Markov decision process strategies together with Bayesian estimators. In order to reduce the required processing power and yet allow for multimodal or non-Gaussian distributions, a scheme based on a particle filter and a corresponding cloud of input signals is proposed in this paper. Results are presented and compared to a scheme with an extended Kalman filter and the assumption that the certainty equivalence holds.
Scores: 1.031855, 0.03229, 0.031329, 0.028571, 0.006787, 0.000441, 0.000042, 0.000017, 0.000006, 0.000001, 0, 0, 0, 0
VEWS: A Wikipedia Vandal Early Warning System We study the problem of detecting vandals on Wikipedia before any human or known vandalism detection system flags them, so that potential vandals can be presented early to Wikipedia administrators. We leverage multiple classical ML approaches, but develop 3 novel sets of features. Our Wikipedia Vandal Behavior (WVB) approach uses a novel set of user editing patterns as features to classify some users as vandals. Our Wikipedia Transition Probability Matrix (WTPM) approach uses a set of features derived from a transition probability matrix and then reduces it via a neural net auto-encoder to classify some users as vandals. The VEWS approach merges the previous two approaches. Without using any information (e.g. reverts) provided by other users, these algorithms each have over 85% classification accuracy. Moreover, when temporal recency is considered, accuracy goes to almost 90%. We carry out detailed experiments on a new data set we have created consisting of about 33K Wikipedia users (including both a black list and a white list of editors) and containing 770K edits. We describe specific behaviors that distinguish between vandals and non-vandals. We show that VEWS beats ClueBot NG and STiki, the best known algorithms today for vandalism detection. Moreover, VEWS detects far more vandals than ClueBot NG and on average, detects them 2.39 edits before ClueBot NG when both detect the vandal. However, we show that the combination of VEWS and ClueBot NG can give a fully automated vandal early warning system with even higher accuracy.
GloVe: Global Vectors for Word Representation.
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Extended stable semantics for normal and disjunctive programs
The nature of statistical learning theory.
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
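In the spirit of such proof procedures, a minimal DPLL-style satisfiability search fits in a few lines; this is our sketch of the general idea, not the paper's original program. Clauses are frozensets of signed integers.

```python
def dpll(clauses):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    if not clauses:
        return set()                      # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return None                       # empty clause: conflict
    # Unit propagation: a unit clause forces its literal; otherwise branch.
    unit = next((c for c in clauses if len(c) == 1), None)
    lit = next(iter(unit)) if unit else next(iter(clauses[0]))
    for choice in ([lit] if unit else [lit, -lit]):
        # Drop satisfied clauses, remove the falsified literal elsewhere.
        reduced = [c - {-choice} for c in clauses if choice not in c]
        result = dpll(reduced)
        if result is not None:
            return result | {choice}
    return None

# (p or q) and (not p or q) and (not q or r)
cnf = [frozenset(s) for s in [{1, 2}, {-1, 2}, {-2, 3}]]
print(dpll(cnf))  # e.g. {1, 2, 3}
```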
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem.
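The planar Frenet–Serret model reduces to three scalar ODEs for unit-speed motion. The sketch below (step size, horizon, and the constant-curvature control are illustrative assumptions) integrates them under a steering input u; constant curvature yields circular motion, the simplest relative equilibrium.

```python
import numpy as np

def simulate(u, x0=0.0, y0=0.0, theta0=0.0, dt=0.01, steps=1000):
    """Euler integration of planar unit-speed Frenet-Serret motion."""
    x, y, theta = x0, y0, theta0
    traj = []
    for k in range(steps):
        x += dt * np.cos(theta)     # x' = cos(theta)
        y += dt * np.sin(theta)     # y' = sin(theta)
        theta += dt * u(k * dt)     # theta' = u  (curvature control)
        traj.append((x, y, theta))
    return np.array(traj)

circle = simulate(lambda t: 1.0)    # constant curvature -> unit circle
print(circle[-1])
```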
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of its power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
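A stochastic local-search solver of the kind combined here with propositional encodings can be sketched as a WalkSAT-style loop; the noise parameter, flip budget, and toy formula below are our assumptions.

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """Minimal WalkSAT-style stochastic local search over CNF clauses
    (lists of signed integers). Returns an assignment dict or None."""
    rnd = random.Random(seed)
    assign = {v: rnd.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign
        clause = rnd.choice(unsat)
        if rnd.random() < p:
            var = abs(rnd.choice(list(clause)))        # random-walk move
        else:
            def score(v):                              # greedy move: count
                assign[v] = not assign[v]              # clauses satisfied
                s = sum(sat(c) for c in clauses)       # after flipping v
                assign[v] = not assign[v]
                return s
            var = max((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None

cnf = [[1, 2], [-1, 2], [-2, 3]]
print(walksat(cnf, n_vars=3))
```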
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores: 1.2, 0.007407, 0.000098, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Unsupervised Visual Attribute Transfer with Reconfigurable Generative Adversarial Networks. Learning to transfer visual attributes typically requires a supervised dataset: corresponding images of the same identity with varying attribute values are required for learning the transfer function. This largely limits the applicability of such methods, because capturing these corresponding images is often a difficult task. To address the issue, we propose an unsupervised method to learn to transfer visual attributes. The proposed method can learn the transfer function without any corresponding images. Inspecting visualization results from various unsupervised attribute transfer tasks, we verify the effectiveness of the proposed method.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
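For intuition, the stable models of a small ground program can be enumerated by brute force using the Gelfond–Lifschitz reduct. The sketch below (the rule encoding is ours) recovers the two stable models of the classic pair p :- not q and q :- not p.

```python
from itertools import chain, combinations

# A ground rule is (head, pos_body, neg_body);
# e.g. p :- q, not r  becomes  ('p', {'q'}, {'r'}).
rules = [('p', set(), {'q'}),      # p :- not q
         ('q', set(), {'p'})]      # q :- not p

atoms = {a for h, pos, neg in rules for a in {h} | pos | neg}

def least_model(definite_rules):
    """Least model of a negation-free program by forward chaining."""
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite_rules:
            if pos <= m and h not in m:
                m.add(h)
                changed = True
    return m

def is_stable(candidate):
    # Reduct: drop rules whose negative body intersects the candidate,
    # then delete the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(
    combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])  # [{'p'}, {'q'}]
```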
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A State-Based Intention Driven Declarative Process Model Declarative process models support process flexibility, whose importance has been widely recognized, particularly for organizations that face frequent changes and variable stimuli from their environment. However, the currently dominant declarative approaches lack expressiveness for addressing the process context, namely environment effects, and for leading execution towards a goal. This paper proposes a declarative model which addresses activities as well as states, external events, and goals. The model is based on the Generic Process Model (GPM), extended by a notion of activity, which includes a state change aspect and an intentional aspect. The achievement of the intention of an activity may depend on events in the environment and is hence not certain. The paper provides a formalization of the model and describes an execution mechanism. It emphasizes the usefulness of specifying the intentional aspect of activities, by using it as a basis for semantic validation of the model at design time and for a planning module that can guide execution at runtime. These are illustrated by an example from the medical domain.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
RAID5 performance with distributed sparing Distributed sparing is a method to improve the performance of RAID5 disk arrays with respect to a dedicated sparing system with N + 2 disks (including the spare disk), since it utilizes the bandwidth of all N + 2 disks. We analyze the performance of RAID5 with distributed sparing in normal mode, degraded mode, and rebuild mode in an OLTP environment, which implies small reads and writes. The analysis in normal mode uses an M/G/1 queuing model, which takes into account the components of disk service time. In degraded mode, a low-cost approximate method is developed to estimate the mean response time of fork-join requests resulting from accesses to recreate lost data on the failed disk. Rebuild mode performance is analyzed by considering an M/G/1 vacationing server model with multiple vacations of different types to take into account differences in processing requirements for reading the first and subsequent tracks. An iterative solution method is used to estimate the mean response time of disk requests, as well as the time to read each disk, which is shown to be quite accurate through validation against simulation results. We next compare RAID5 performance in a system 1) without a cache; 2) with a cache; and 3) with a nonvolatile storage (NVS) cache. The last configuration, in addition to improved read response time due to cache hits, provides a fast-write capability, such that dirty blocks can be destaged asynchronously and at a lower priority than read requests, resulting in an improvement in read response time. The small write penalty is also reduced due to the possibility of repeated writes to dirty blocks in the cache and by taking advantage of disk geometry to efficiently destage multiple blocks at a time.
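The normal-mode analysis rests on the M/G/1 queue; as a back-of-the-envelope example, the Pollaczek–Khinchine formula gives the mean response time from the first two moments of disk service time (the numbers below are illustrative, not the paper's).

```python
# M/G/1 mean response time via the Pollaczek-Khinchine formula.
lam = 0.05             # arrival rate (requests per ms)
ES, ES2 = 12.0, 200.0  # first and second moments of service time (ms, ms^2)

rho = lam * ES                    # server utilization
Wq = lam * ES2 / (2 * (1 - rho))  # mean waiting time in queue
R = Wq + ES                       # mean response time
print(f"utilization={rho:.2f}  mean response time={R:.1f} ms")
```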
On Variable Scope of Parity Protection in Disk Arrays In a common form of a RAID 5 architecture, data is organized on a disk array consisting of N + 1 disks into stripes of N data blocks and one parity block (with parity block locations staggered so as to balance the number of parity blocks on each disk). This allows data to be recovered in the event of a single disk failure. Here we consider an extension to this architecture in which parity information applies to arbitrary subsets of the data blocks in each stripe. Using several simplifying assumptions, we present simulation and analytic results that provide estimates of the improvement using this approach, in terms of total I/O operations, as compared to 1) conventional RAID 5 under a random single-block write workload, and 2) the use of a log-structured file system in which data is written out in stripes. Results on the reduction of disk recovery costs are also presented.
Self-adaptive Disk Arrays We present a disk array organization that adapts itself to successive disk failures. When all disks are operational, all data are replicated on two disks. Whenever a disk fails, the array reorganizes itself, by selecting a disk containing redundant data and replacing these data by their exclusive or (XOR) with the other copy of the data contained on the disk that failed. This will protect the array against any single disk failure until the failed disk gets replaced and the array can revert to its original condition. Hence data will remain protected against the successive failures of up to one half of the original number of disks, provided that no critical disk failure happens while the array is reorganizing itself. As a result, our scheme achieves the same access times as a replicated organization under normal operational conditions while having a much lower likelihood of losing data under abnormal conditions. In addition it tolerates much longer repair times than static disk arrays.
Uniform parity group distribution in disk arrays with multiple failures Several new disk arrays have recently been proposed in which the parity groupings are uniformly distributed throughout the array so that the extra workload created by a disk failure can be evenly shared by all the surviving disks, resulting in the best possible degraded mode performance. Many arrays now also put in multiple spare disks so that expensive service calls can be deferred. Furthermore, in a new sparing scheme called distributed sparing, the spare spaces are actually distributed throughout the array. This means after a rebuild the new array will be logically different from the original array. The authors present an algorithm for constructing and maintaining arrays with distributed sparing so that repeated uniform parity group distribution is achieved with each successive failure.
Understanding disk failure rates: What does an MTTF of 1,000,000 hours mean to you? Component failure in large-scale IT installations is becoming an ever-larger problem as the number of components in a single cluster approaches a million. This article is an extension of our previous study on disk failures [Schroeder and Gibson 2007] and presents and analyzes field-gathered disk replacement data from a number of large production systems, including high-performance computing sites and internet services sites. More than 110,000 disks are covered by this data, some for an entire lifetime of five years. The data includes drives with SCSI and FC, as well as SATA interfaces. The mean time-to-failure (MTTF) of those drives, as specified in their datasheets, ranges from 1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%. We find that in the field, annual disk replacement rates typically exceed 1%, with 2-4% common and up to 13% observed on some systems. This suggests that field replacement is a fairly different process than one might predict based on datasheet MTTF. We also find evidence, based on records of disk replacements in the field, that failure rate is not constant with age, and that rather than a significant infant mortality effect, we see a significant early onset of wear-out degradation. In other words, the replacement rates in our data grew constantly with age, an effect often assumed not to set in until after a nominal lifetime of 5 years. Interestingly, we observe little difference in replacement rates between SCSI, FC, and SATA drives, potentially an indication that disk-independent factors such as operating conditions affect replacement rates more than component-specific ones. On the other hand, we see only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of media error rates, and this instance involved SATA disks. Time between replacement, a proxy for time between failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence.
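The datasheet-to-AFR conversion this study starts from is a one-liner under the exponential-lifetime assumption, the very assumption the field data then calls into question:

```python
import math

# Convert a datasheet MTTF to a nominal annual failure rate, assuming an
# exponentially distributed lifetime (8760 hours per year).
for mttf_hours in (1_000_000, 1_500_000):
    afr = 1 - math.exp(-8760 / mttf_hours)
    print(f"MTTF {mttf_hours:>9,} h  ->  nominal AFR {afr:.2%}")
```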
Higher reliability redundant disk arrays: Organization, operation, and coding Parity is a popular form of data protection in redundant arrays of inexpensive/independent disks (RAID). RAID5 dedicates one out of N disks to parity to mask single disk failures, that is, the contents of a block on a failed disk can be reconstructed by exclusive-ORing the corresponding blocks on surviving disks. RAID5 can mask a single disk failure, and it is vulnerable to data loss if a second disk failure occurs. The RAID5 rebuild process systematically reconstructs the contents of a failed disk on a spare disk, returning the system to its original state, but the rebuild process may be unsuccessful due to unreadable sectors. This has led to two disk failure tolerant arrays (2DFTs), such as RAID6 based on Reed-Solomon (RS) codes. EVENODD, RDP (Row-Diagonal-Parity), the X-code, and RM2 (Row-Matrix) are 2DFTs with parity coding. RM2 incurs a higher level of redundancy than two disks, while the X-code is limited to a prime number of disks. RDP is optimal with respect to the number of XOR operations at the encoding, but not for short write operations. For small symbol sizes EVENODD and RDP have the same disk access pattern as RAID6, while RM2 and the X-code incur a high recovery cost with two failed disks. We describe variations to RAID5 and RAID6 organizations, including clustered RAID, different methods to update parities, rebuild processing, disk scrubbing to eliminate sector errors, and the intra-disk redundancy (IDR) method to deal with sector errors. We summarize the results of recent studies of failures in hard disk drives. We describe Markov chain reliability models to estimate RAID mean time to data loss (MTTDL) taking into account sector errors and the effect of disk scrubbing. Numerical results show that RAID5 plus IDR attains the same MTTDL level as RAID6, while incurring a lower performance penalty. We conclude with a survey of analytic and simulation studies of RAID performance and tools and benchmarks for RAID performance evaluation.
Parity logging disk arrays Parity-encoded redundant disk arrays provide highly reliable, cost-effective secondary storage with high performance for reads and large writes. Their performance on small writes, however, is much worse than mirrored disks—the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small-write problem for redundant disk arrays. Parity logging applies journalling techniques to reduce substantially the cost of small writes. We provide detailed models of parity logging and competing schemes—mirroring, floating storage, and RAID level 5—and verify these models by simulation. Parity logging provides performance competitive with mirroring, but with capacity overhead close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching more effectively than all three alternative approaches.
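A toy sketch of the parity-logging idea, under our own simplifying assumptions: a small write appends the XOR difference between old and new data to a log instead of updating the parity block in place, and the log is later applied to the parity region in one large, efficient pass:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data   = {0: [b"AAAA", b"BBBB"]}            # data blocks per stripe
parity = {0: xor(b"AAAA", b"BBBB")}         # parity = XOR of the stripe
log    = []                                 # deferred parity updates

def small_write(stripe, idx, new):
    old = data[stripe][idx]
    data[stripe][idx] = new
    log.append((stripe, xor(old, new)))     # cheap append, no parity I/O

def apply_log():
    while log:
        stripe, delta = log.pop(0)
        parity[stripe] = xor(parity[stripe], delta)

small_write(0, 1, b"CCCC")
apply_log()
assert parity[0] == xor(b"AAAA", b"CCCC")   # parity is consistent again
```

The saving comes from replacing one random read-modify-write of parity per small write with a sequential log append, amortizing the parity updates into batches.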
An analysis of data corruption in the storage stack An important threat to reliable storage of data is silent data corruption. In order to develop suitable protection mechanisms against data corruption, it is essential to understand its characteristics. In this paper, we present the first large-scale study of data corruption. We analyze corruption instances recorded in production storage systems containing a total of 1.53 million disk drives, over a period of 41 months. We study three classes of corruption: checksum mismatches, identity discrepancies, and parity inconsistencies. We focus on checksum mismatches since they occur the most. We find more than 400,000 instances of checksum mismatches over the 41-month period. We find many interesting trends among these instances including: (i) nearline disks (and their adapters) develop checksum mismatches an order of magnitude more often than enterprise class disk drives, (ii) checksum mismatches within the same disk are not independent events and they show high spatial and temporal locality, and (iii) checksum mismatches across different disks in the same storage system are not independent. We use our observations to derive lessons for corruption-proof system design.
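A hedged illustration of why per-block checksums catch this class of corruption (not the studied systems' actual mechanism): a checksum is stored with the data at write time and re-verified on read, so a block silently modified anywhere below is flagged as a mismatch:

```python
import zlib

def write_block(store, addr, data):
    store[addr] = (data, zlib.crc32(data))   # checksum stored with the data

def read_block(store, addr):
    data, stored_crc = store[addr]
    if zlib.crc32(data) != stored_crc:
        raise IOError(f"checksum mismatch at block {addr}")
    return data

store = {}
write_block(store, 7, b"hello")
store[7] = (b"hEllo", store[7][1])           # simulate a silent bit flip
try:
    read_block(store, 7)
except IOError as e:
    print(e)                                 # checksum mismatch at block 7
```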
Understanding latent sector errors and how to protect against them Latent sector errors (LSEs) refer to the situation where particular sectors on a drive become inaccessible. LSEs are a critical factor in data reliability, since a single LSE can lead to data loss when encountered during RAID reconstruction after a disk failure or in systems without redundancy. LSEs happen at a significant rate in the field [Bairavasundaram et al. 2007], and are expected to grow more frequent with new drive technologies and increasing drive capacities. While two approaches, data scrubbing and intra-disk redundancy, have been proposed to reduce data loss due to LSEs, neither of these approaches has been evaluated on real field data. This article makes two contributions. We provide an extended statistical analysis of latent sector errors in the field, specifically from the viewpoint of how to protect against LSEs. In addition to providing interesting insights into LSEs, we hope the results (including parameters for models we fit to the data) will help researchers and practitioners without access to data in driving their simulations or analysis of LSEs. Our second contribution is an evaluation of five different scrubbing policies and five different intra-disk redundancy schemes and their potential in protecting against LSEs. Our study includes schemes and policies that have been suggested before, but have never been evaluated on field data, as well as new policies that we propose based on our analysis of LSEs in the field.
Detection and exploitation of file working sets The work habits of most individuals yield file access patterns that are quite pronounced and can be regarded as defining working sets of files used for particular applications. This paper describes a client-side cache management technique for detecting these patterns and then exploiting them to successfully prefetch files from servers. Trace-driven simulations show the technique substantially increases the hit rate of a client file cache in an environment in which a client workstation is dedicated to a single user. Successful file prefetching carries three major advantages: (1) applications run faster, (2) there is less "burst" load placed on the network, and (3) properly-loaded client caches can better survive network outages. Our technique requires little extra code, and, because it is simply an augmentation of the standard LRU client cache management algorithm, is easily incorporated into existing software.
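A minimal sketch of the flavor of this technique, with the pattern detection reduced to a first-order successor table (the paper's working-set detection is more sophisticated): on each access, record which file followed which, and prefetch the most frequent successor of the file just touched:

```python
from collections import OrderedDict, defaultdict

class PrefetchingLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()          # file -> contents, in LRU order
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last = None

    def _insert(self, name, fetch):
        if name not in self.cache:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[name] = fetch(name)
        self.cache.move_to_end(name)

    def access(self, name, fetch):
        hit = name in self.cache
        self._insert(name, fetch)
        if self.last is not None:
            self.successors[self.last][name] += 1   # learn the pattern
        succ = self.successors[name]
        if succ:                                    # prefetch likely next file
            self._insert(max(succ, key=succ.get), fetch)
        self.last = name
        return hit

fetch = lambda name: f"<contents of {name}>"        # stand-in for server I/O
cache = PrefetchingLRU(capacity=3)
for f in ["make", "cc", "ld", "make", "cc"]:
    cache.access(f, fetch)
print("ld" in cache.cache)   # True: "ld" was prefetched after the second "cc"
```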
A feedback-driven proportion allocator for real-rate scheduling In this paper we propose changing the decades-old practice of allocating CPU to threads based on priority to a scheme based on proportion and period. Our scheme allocates to each thread a percentage of CPU cycles over a period of time, and uses a feedback-based adaptive scheduler to assign automatically both proportion and period. Applications with known requirements, such as isochronous software devices, can bypass the adaptive scheduler by specifying their desired proportion and/or period. As a result, our scheme provides reservations to applications that need them, and the benefits of proportion and period to those that do not. Adaptive scheduling using proportion and period has several distinct benefits over either fixed or adaptive priority-based schemes: finer-grain control of allocation, lower variance in the amount of cycles allocated to a thread, and avoidance of accidental priority inversion and starvation, including defense against denial-of-service attacks. This paper describes our design of an adaptive controller and proportion-period scheduler, its implementation in Linux, and experimental validation of our approach.
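For a concrete feel of proportion-based allocation, here is a stride-scheduling sketch, a standard proportional-share technique; the paper's feedback controller, which chooses the proportions and periods automatically, is not reproduced:

```python
# Stride scheduling: each thread advances a virtual "pass" inversely to its
# share; the runnable thread with the smallest pass runs next, so CPU time
# converges to the requested proportions.
import heapq

LARGE = 1 << 20

def schedule(shares, quanta):
    # shares: {thread: proportion weight}; returns the dispatch order.
    heap = [(LARGE // w, LARGE // w, t) for t, w in shares.items()]
    heapq.heapify(heap)
    order = []
    for _ in range(quanta):
        pass_val, stride, t = heapq.heappop(heap)
        order.append(t)
        heapq.heappush(heap, (pass_val + stride, stride, t))
    return order

print(schedule({"A": 3, "B": 1}, 8))   # A runs ~3x as often as B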
Providing user support for interactive applications with FUSE FUSE (Formal User Interface Specification Environment) is an integrated user interface development environment that offers tool-based support for all phases of the interface design process. PLUG-IN forms one part of FUSE. Its purpose is to provide support for the end-user working with user interfaces generated by FUSE. PLUG-IN produces dynamic on-line help pages and animation sequences on the fly. The dynamic help pages display textual help for the user, whereas the animation sequences show how the user can interact with the application. In the presentation, the architecture of FUSE is discussed. Furthermore, PLUG-IN's user guidance capabilities are demonstrated by looking at the user interface of an interactive ISDN telephone simulation.
When cryptography meets storage Confidential data storage through encryption is becoming increasingly important. Designers and implementers of encryption methods of storage media must be aware that storage has different usage patterns and properties compared to securing other information media such as networks. In this paper, we empirically demonstrate two-time pad vulnerabilities in storage that are exposed via shifting file contents, in-place file updates, storage mechanisms hidden by layers of abstractions, inconsistencies between memory and disk content, and backups. We also demonstrate how a simple application of Bloom filters can automatically extract plaintexts from two-time pads. Further, our experience sheds light on system research directions to better support cryptographic assumptions and guarantees.
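The core two-time-pad weakness is easy to demonstrate: XOR-ing two ciphertexts produced with the same keystream cancels the key and leaks the XOR of the plaintexts, from which known or guessed fragments recover text. A sketch (the paper's Bloom-filter extraction is not shown):

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)                  # keystream reused by mistake
p1, p2 = b"transfer $100 to", b"attack at dawn!!"
c1, c2 = xor(p1, key), xor(p2, key)

leak = xor(c1, c2)
assert leak == xor(p1, p2)            # the key has cancelled out
# Knowing (or guessing) p1 now yields p2 directly:
assert xor(leak, p1) == p2
```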
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.01814
0.01755
0.011728
0.009044
0.00631
0.004386
0.002339
0.000831
0.000104
0.000011
0
0
0
0
Distributed Representations for Compositional Semantics. The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches, meaning distributed representations that exploit co-occurrence statistics of large corpora, have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
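A brute-force illustration of the definition, using the usual Gelfond-Lifschitz construction: a candidate set is stable if the least model of the reduct (drop rules whose negative body intersects the candidate, then delete the remaining negative literals) reproduces the candidate exactly. Exponential, for tiny programs only:

```python
from itertools import chain, combinations

# Rules: (head, positive body atoms, negated body atoms)
program = [
    ("p", (), ("q",)),      # p :- not q.
    ("q", (), ("p",)),      # q :- not p.
]
atoms = {a for h, pos, neg in program for a in (h, *pos, *neg)}

def least_model(definite_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    reduct = [(h, pos) for h, pos, neg in program
              if not (set(neg) & candidate)]       # Gelfond-Lifschitz reduct
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])   # [{'p'}, {'q'}]
```

The two stable models {p} and {q} show the characteristic multiplicity of the semantics on even-length negative cycles.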
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
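A numpy sketch of the method as summarized above, assuming a Gaussian kernel: form the kernel matrix, center it in feature space, and solve the resulting eigenvalue problem; projections come from the leading eigenvectors:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                           # projected training data

X = np.random.default_rng(0).normal(size=(50, 3))
print(kernel_pca(X, 2).shape)                    # (50, 2)
```

The eigenvector scaling by 1/sqrt(lambda) normalizes the implicit feature-space components to unit length, as in the standard derivation.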
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
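The square-root idea in one small dense example (our simplification; the paper exploits sparsity and column ordering): a linearized SLAM step is a least-squares problem min ||Ax - b||^2, solved via QR factorization of the measurement Jacobian rather than EKF-style covariance updates:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))     # stacked measurement Jacobian
b = rng.normal(size=40)           # stacked residuals

Q, R = np.linalg.qr(A)            # R is the square-root information factor
x = np.linalg.solve(R, Q.T @ b)   # back-substitution gives the update

# Same answer as the normal equations (A^T A) x = A^T b:
assert np.allclose(x, np.linalg.solve(A.T @ A, A.T @ b))
```

Working with R instead of A^T A is numerically gentler, and in the sparse SLAM setting a good column ordering keeps R sparse, which is where the speed of these methods comes from.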
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Storage systems for movies-on-demand video servers We evaluate storage system alternatives for movies-on-demand video servers. We begin by characterizing the movies-on-demand workload. We briefly discuss performance in disk arrays. First, we study disk farms in which one movie is stored per disk. This is a simple scheme, but it wastes substantial disk bandwidth, because disks holding less popular movies are underutilized; also, good performance requires that movies be replicated to reflect the user request pattern. Next, we examine disk farms in which movies are striped across disks, and find that striped video servers offer nearly full utilization of the disks by achieving better load balancing. For the remainder of the paper, we concentrate on tertiary storage systems. We evaluate the use of storage hierarchies for video service. These hierarchies include a tertiary library along with a disk farm. We examine both magnetic tape libraries and optical disk jukeboxes. We show that, unfortunately, neither tertiary system performs adequately as part of a storage hierarchy to service the predicted distribution of movie accesses. We suggest changes to tertiary libraries that would make them better-suited to these applications.
Performance optimization for parallel tape arrays With the advent of multimedia computing, the demand for very-large-scale storage systems becomes ever more imminent. Tertiary memory systems, once considered exotic devices equipped only with high-end computer systems, are now gradually moving into the mainstream. Although helical scan tape offers an economically feasible solution to the media cost problem for storing petabytes worth of data, the associated drives usually exhibit relatively poor performance and reliability characteristics.
Striping in large tape libraries
Striped Tape Arrays A growing number of applications require high capacity, high throughput tertiary storage systems. We are investigating how data striping ideas apply to arrays of magnetic tape drives. Data striping increases throughput and reduces response time for large accesses to a storage system. Striped magnetic tape systems are particularly appealing because many inexpensive magnetic tape drives have low bandwidth; striping may offer dramatic performance improvements for these systems. There are several important issues in designing striped tape systems: the choice of tape drives and robots, whether to stripe within or between robots, and the choice of the best scheme for distributing data on cartridges. One of the most troublesome problems in striped tape arrays is the synchronization of transfers across tape drives. Another issue is how improved devices will affect the desirability of striping in the future. We present the results of simulations comparing the performance of striped tape systems to non-striped systems.
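The basic striping map these systems share is a one-liner: logical block i belongs to stripe i // stripe_unit, which round-robins across devices. A sketch (illustrative only; real tape-cartridge layout adds considerably more machinery):

```python
def locate(block, stripe_unit, n_devices):
    stripe = block // stripe_unit
    device = stripe % n_devices                       # round-robin placement
    offset = (stripe // n_devices) * stripe_unit + block % stripe_unit
    return device, offset

# With 4 drives and a stripe unit of 2 blocks:
for blk in range(8):
    dev, off = locate(blk, 2, 4)
    print(f"logical block {blk} -> drive {dev}, offset {off}")
```

A large sequential access touches all drives at once, which is exactly the bandwidth-aggregation effect striping is after; the synchronization problem the abstract mentions arises because those drives must then stream in lockstep.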
Ursa minor: versatile cluster-based storage No single encoding scheme or fault model is optimal for all data. A versatile storage system allows them to be matched to access patterns, reliability requirements, and cost goals on a per-data item basis. Ursa Minor is a cluster-based storage system that allows data-specific selection of, and on-line changes to, encoding schemes and fault models. Thus, different data types can share a scalable storage infrastructure and still enjoy specialized choices, rather than suffering from "one size fits all." Experiments with Ursa Minor show performance benefits of 2-3× when using specialized choices as opposed to a single, more general, configuration. Experiments also show that a single cluster supporting multiple workloads simultaneously is much more efficient when the choices are specialized for each distribution rather than forced to use a "one size fits all" configuration. When using the specialized distributions, aggregate cluster throughput nearly doubled.
Influence of Adaptive Data Layouts on Performance in Dynamically Changing Storage Environments For most of today's IT environments, the tremendous need for storage capacity in combination with a required minimum I/O performance has become highly critical. In dynamically growing environments, a storage management solution's underlying data distribution scheme has a great impact on the overall system I/O performance. The evaluation of a number of open system storage virtualization solutions and volume managers has shown that all of them lack the ability to automatically adapt to changing access patterns and storage infrastructures; many of them require an error-prone manual re-layout of the data blocks, or rely on a very time consuming re-striping of all available data. This paper evaluates the performance of conventional data distribution approaches compared to the adaptive virtualization solution V:DRIVE in dynamically changing storage environments. Changes of the storage infrastructure are normally not considered in benchmark results, but can have a significant impact on storage performance. Using synthetic benchmarks, V:DRIVE is compared in such changing environments with the non-adaptive Linux Logical Volume Manager (LVM). The performance results of our tests clearly outline the necessity of adaptive data distribution schemes.
Transforming policies into mechanisms with infokernel We describe an evolutionary path that allows operating systems to be used in a more flexible and appropriate manner by higher-level services. An infokernel exposes key pieces of information about its algorithms and internal state; thus, its default policies become mechanisms, which can be controlled from user-level. We have implemented two prototype infokernels based on the Linux 2.4 and NetBSD kernels, called infolinux and infobsd, respectively. The infokernels export key abstractions as well as basic information primitives. Using infolinux, we have implemented four case studies showing that policies within Linux can be manipulated outside of the kernel. Specifically, we show that the default file cache replacement algorithm, file layout policy, disk scheduling algorithm, and TCP congestion control algorithm can each be turned into base mechanisms. For each case study, we have found that infokernel abstractions can be implemented with little code and that the overhead and accuracy of synthesizing policies at user-level is acceptable.
Energy efficiency through burstiness OS resource management policies traditionally employ buffering to "smooth out" fluctuations in resource demand. By minimizing the length of idle periods and the level of contention during non-idle periods, such smoothing tends to maximize overall throughput and minimize the latency of individual requests. For certain important devices, however (disks, network interfaces, or even computational elements), smoothing eliminates opportunities to save energy using low-power modes. As devices with such modes proliferate, and as energy efficiency becomes an increasingly important design consideration, we argue that OS policies should be redesigned to increase burstiness for energy-sensitive devices. We are currently experimenting with techniques to increase the disk access pattern burstiness of the Linux operating system. Our results indicate that the deliberate creation of bursty activity can save up to 78.5% of the energy consumed by a Hitachi DK23DA disk (in comparison with current policies), while simultaneously decreasing the negative impact of disk congestion and spin-up latency on application performance.
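A toy sketch of deliberately created burstiness, with invented parameters: writes are buffered and released in one batch, so the disk alternates short bursts of activity with long idle periods in which it can enter a low-power mode:

```python
import time

class BurstyWriter:
    def __init__(self, device, max_buffered=64, max_delay_s=30.0):
        self.device, self.buf = device, []
        self.max_buffered, self.max_delay_s = max_buffered, max_delay_s
        self.first_write = None

    def write(self, block):
        self.buf.append(block)
        if self.first_write is None:
            self.first_write = time.monotonic()
        # Flush when the buffer is full or the oldest write is too stale.
        if (len(self.buf) >= self.max_buffered or
                time.monotonic() - self.first_write >= self.max_delay_s):
            self.flush()

    def flush(self):
        for block in self.buf:           # one burst of disk activity
            self.device.write(block)
        self.buf.clear()
        self.first_write = None          # disk may now idle and spin down
```

The staleness bound caps how long data sits only in memory, the same durability trade-off any write-behind buffer makes.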
Second-Level Buffer Cache Management Buffer caches are commonly used in servers to reduce the number of slow disk accesses or network messages. These buffer caches form a multilevel buffer cache hierarchy. In such a hierarchy, second-level buffer caches have different access patterns from first-level buffer caches because accesses to a second-level buffer cache are actually misses from a first-level one. Therefore, commonly used cache management algorithms such as the Least Recently Used (LRU) replacement algorithm that work well for single-level buffer caches may not work well for second-level ones. This paper investigates multiple approaches to effectively manage second-level buffer caches. In particular, it reports our research results in 1) second-level buffer cache access pattern characterization, 2) a new local algorithm called Multi-Queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, 3) a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over corresponding local algorithms, and 4) implementation and evaluation of these algorithms in a real storage system connected with commercial database servers (Microsoft SQL Server and Oracle) running industrial-strength online transaction processing benchmarks.
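A much-simplified sketch in the spirit of MQ (the paper's lifetime-based demotion and history queue are omitted): blocks live in several LRU queues indexed by the logarithm of their access count, so blocks re-referenced many times survive longer than under plain LRU, which suits the frequency-skewed access pattern of a second-level cache:

```python
from collections import OrderedDict

class SimpleMQ:
    def __init__(self, capacity, n_queues=4):
        self.capacity = capacity
        self.queues = [OrderedDict() for _ in range(n_queues)]
        self.freq = {}                       # block -> access count

    def _level(self, block):
        # Queue index grows with log2 of the access count, capped at the top.
        return min(self.freq[block].bit_length() - 1, len(self.queues) - 1)

    def access(self, block):
        hit = block in self.freq
        if hit:
            self.queues[self._level(block)].pop(block)
        self.freq[block] = self.freq.get(block, 0) + 1
        self.queues[self._level(block)][block] = True
        if not hit and len(self.freq) > self.capacity:
            for q in self.queues:            # evict LRU of lowest queue
                if q:
                    victim, _ = q.popitem(last=False)
                    del self.freq[victim]
                    break
        return hit
```

A once-touched block sits in queue 0 and is evicted first; a block accessed eight times has climbed to queue 3 and outlives bursts of cold misses.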
A Case for Fault-Tolerant Memory for Transaction Processing
NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions.
A survey of computational complexity results in systems and control The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial-time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
NP-Completeness of Refutability by Literal-Once Resolution A boolean formula in conjunctive normal form (CNF) F is refuted by literal-once resolution if the empty clause is inferred from F by resolving on each literal of F at most once. Literal-once resolution refutations can be found nondeterministically in polynomial time, though this restricted system is not complete. We show that despite the weakness of literal-once resolution, the recognition of CNF-formulas which are refutable by literal-once resolution is NP-complete. We study the relationship between literal-once resolution and read-once resolution (introduced by Iwama and Miyano). Further, we answer a question posed by Kullmann related to minimal unsatisfiability.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.105114
0.1
0.035038
0.016667
0.001252
0.000083
0.000019
0.000008
0.000003
0
0
0
0
0
Optimizing acoustic feature extractor for anomalous sound detection based on Neyman-Pearson lemma. We propose a method for optimizing an acoustic feature extractor for anomalous sound detection (ASD). Most ASD systems adopt outlier-detection techniques because it is difficult to collect a massive amount of anomalous sound data. To improve the performance of such outlier-detection-based ASD, it is essential to extract a set of efficient acoustic features that is suitable for identifying anomalous sounds. However, the ideal property of a set of acoustic features that maximizes ASD performance has not been clarified. By considering outlier-detection-based ASD as a statistical hypothesis test, we defined optimality as an objective function that adopts the Neyman-Pearson lemma; the acoustic feature extractor is optimized to extract a set of acoustic features which maximize the true positive rate under an arbitrary false positive rate. The variational auto-encoder is applied as an acoustic feature extractor and optimized to maximize the objective function. We confirmed that the proposed method improved the F-measure score by 0.02 to 0.06 points compared to conventional methods, and ASD results for a stereolithography 3D-printer in a real environment show that the proposed method is effective in identifying anomalous sounds.
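A sketch of the Neyman-Pearson-style criterion this abstract describes, on made-up scores: fix the tolerated false positive rate on normal data, take the corresponding score quantile as the threshold, and read off the true positive rate achieved at that threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
normal_scores = rng.normal(0.0, 1.0, 10_000)   # e.g. reconstruction error
anomaly_scores = rng.normal(2.5, 1.0, 1_000)   # anomalies score higher

fpr = 0.01
threshold = np.quantile(normal_scores, 1.0 - fpr)  # 1% of normals exceed it
tpr = float((anomaly_scores > threshold).mean())
print(f"threshold={threshold:.2f}, TPR at FPR={fpr}: {tpr:.2%}")
```

The paper's contribution is to train the feature extractor so that this TPR-at-fixed-FPR quantity itself is maximized, rather than choosing the threshold after the fact on arbitrary features.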
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Incremental Test Case Generation for Distributed Object-Oriented Systems
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead to up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
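As a rough illustration of the parity scheme this abstract outlines, the sketch below computes row and column XOR parities for an n x n block array and keeps extra copies of the row parities; which half of the parities gets mirrored is an assumption here, not the paper's precise construction.

```python
# Hedged sketch (not the paper's exact layout): row/column XOR parity for an
# n x n array of data blocks, plus n extra parities mirroring the row parities.
import numpy as np

def build_parities(data):                  # data: (n, n) array of uint8 blocks
    row_parity = np.bitwise_xor.reduce(data, axis=1)   # n row parities
    col_parity = np.bitwise_xor.reduce(data, axis=0)   # n column parities
    mirror = row_parity.copy()             # n extra copies of half the parities
    return row_parity, col_parity, mirror

def recover_block(data, row_parity, i, j):
    """Rebuild a single lost block data[i, j] from the surviving row + parity."""
    survivors = np.bitwise_xor.reduce(np.delete(data[i], j))
    return np.bitwise_xor(survivors, row_parity[i])

data = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
rp, cp, mirror = build_parities(data)
assert recover_block(data, rp, 2, 3) == data[2, 3]
```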
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Formalism for Representing and Reasoning with Temporal Information, Event and Change In this paper we present a general formalism for representing and reasoning with temporal information, event and change. The temporal framework is a theory of time that takes both points and intervals as temporal primitives and where the base logic is Kleene's three-valued logic. Thus, we can avoid the Divided Instant Problem (DIP). We present a three-valued based Temporal First-Order Nonmonotonic Logic (TFONL) that employs an explicit representation of time and events. We may embody default logic into TFONL, which takes into consideration the frame, qualification and ramification problems.
On integrating event definition and event detection We develop, in this paper, a representation of time and events that supports a range of reasoning tasks such as monitoring and detection of event patterns which may facilitate the explanation of root cause(s) of faults. We shall compare two approaches to event definition: the active database approach in which events are defined in terms of the conditions for their detection at an instant, and the knowledge representation approach in which events are defined in terms of the conditions for their occurrence over an interval. We shall show the shortcomings of the former definition and employ a three-valued temporal first order nonmonotonic logic, extended with events, in order to integrate both definitions.
Formalizing narratives using nested circumscription Representing and reasoning about narratives together with the ability to do hypothetical reasoning is important for agents in a dynamic world. These agents need to record their observations and action executions as a narrative and at the same time, to achieve their goals against a changing environment, they need to make plans (or re-plan) from the current situation. The early action formalisms did one or the other. For example, while the original situation calculus was meant for hypothetical reasoning and planning, the event calculus was more appropriate for narratives. Recently, there have been some attempts at developing formalisms that do both. Independently, there has also been a lot of recent research in reasoning about actions using circumscription. Of particular interest to us is the research on using high-level languages and their logical representation using nested abnormality theories (NATs), a form of circumscription with blocks that make knowledge representation modular. Starting from theories in the high-level language L, which is extended to allow concurrent actions, we define a translation to NATs that preserves both narrative and hypothetical reasoning. We initially use the high-level language L, and then extend it to allow concurrent actions. In the process, we study several knowledge representation issues such as filtering, and restricted monotonicity with respect to NATs. Finally, we compare our formalization with other approaches, and discuss how our use of NATs makes it easier to incorporate other features of action theories, such as constraints, to our formalization.
A Transition Function Based Characterization of Actions with Delayed and Continuous Effects In this paper we present a transition function based characterization of actions in a realistic environment. Our language allows for the specification of actions with duration, continuous effects, delayed effects, dependency on non-sharable resources, and accounts for parallel and overlapping execution of actions.
A logic-based calculus of events Formal Logic can be used to represent knowledge of many kinds for many purposes. It can be used to formalize programs, program specifications, databases, legislation, and natural language in general. For many such applications of logic a representation of time is necessary. Although there have been several attempts to formalize the notion of time in classical first-order logic, it is still widely believed that classical logic is not adequate for the representation of time and that some form of non-classical Temporal Logic is needed. In this paper, we shall outline a treatment of time, based on the notion of event, formalized in the Horn clause subset of classical logic augmented with negation as failure. The resulting formalization is executable as a logic program. We use the term "event calculus" to relate it to the well-known "situation calculus" (McCarthy and Hayes 1969). The main difference between the two is conceptual: the situation calculus deals with global states whereas the event calculus deals with local events and time periods. Like the event calculus, the situation calculus can be formalized by means of Horn clauses augmented with negation by failure (Kowalski 1979). The main intended applications investigated in this paper are the updating of databases and narrative understanding. In order to treat both cases uniformly we have taken the view that an update consists of the addition of new knowledge to a knowledge base. The effect of explicit deletion of information in conventional databases is obtained without deletion by adding new knowledge about the end of the period of time for which the information holds.
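To make the flavor of the event calculus concrete, here is a hedged toy sketch in Python: the initiates/terminates tables and the narrative are invented examples, and the paper's actual formalization is a logic program with negation as failure rather than procedural code.

```python
# Toy event-calculus style query: holds_at(f, t) is true if some earlier event
# initiated fluent f and no event in between terminated it.
initiates = {("hire", "employed"), ("promote", "senior")}
terminates = {("fire", "employed")}
narrative = [(1, "hire"), (5, "promote"), (9, "fire")]  # (time, event) pairs

def holds_at(fluent, t):
    state = False
    for time, event in sorted(narrative):
        if time >= t:
            break
        if (event, fluent) in initiates:
            state = True
        if (event, fluent) in terminates:
            state = False
    return state

print(holds_at("employed", 7))   # True  (hired at 1, not yet fired)
print(holds_at("employed", 10))  # False (fired at 9)
```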
Action Languages Action languages are formal models of parts of the natural language that are used for talking about the effects of actions. This article is a collection of definitions related to action languages that may be useful as a reference in future publications.
Formal Characterization of Active Databases. In this paper we take a first step towards characterizing active databases. Declarative characterization of active databases allows additional flexibility in studying the effects of different priority criteria between fireable rules, different actions and event definitions, and also to make claims about effects of transactions and prove them without actually executing them. Our characterization is related but different from similar attempts by Zaniolo in terms of making a clear distinction...
Acyclic programs We study here a natural subclass of the locally stratified programs which we call acyclic. Acyclic programs enjoy several natural properties. First, they terminate for a large and natural class of general goals, so they could be used as terminating PROLOG programs. Next, their semantics can be defined in several equivalent ways. In particular we show that the immediate consequence operator of an acyclic program P has a unique fixpoint M_P, which coincides with the perfect model of P, is the unique Herbrand model of the completion of P, and can be identified with the unique fixpoint of the 3-valued immediate consequence operator associated with P. The completion of an acyclic program P is shown to satisfy an even stronger property: addition of a domain closure axiom results in a theory which is complete and decidable with respect to a large class of formulas including the variable-free ones. This implies that M_P is recursive. On the procedural side we show that SLS-resolution and SLDNF-resolution for acyclic programs coincide, are effective, sound and (non-floundering) complete with respect to the declarative semantics. Finally, we show that various forms of temporal reasoning, as exemplified by the so-called Yale Shooting Problem, can be naturally described by means of acyclic programs.
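A minimal sketch of the immediate consequence operator mentioned in the abstract, for a ground program with negation as failure; the example rules are hypothetical. For acyclic programs the iteration stabilizes (each atom settles after a number of steps bounded by its level), which is why the naive loop below terminates.

```python
# Iterating the immediate consequence operator T_P for a ground program with
# negation as failure. For acyclic programs this converges to the unique
# model M_P described in the abstract. A rule is (head, positive_body, negative_body).
rules = [
    ("flies", {"bird"}, {"penguin"}),   # flies :- bird, not penguin.
    ("bird", set(), set()),             # bird.
]

def tp_fixpoint(rules):
    model = set()
    while True:
        derived = {h for (h, pos, neg) in rules
                   if pos <= model and not (neg & model)}
        if derived == model:
            return model
        model = derived

print(tp_fixpoint(rules))  # {'bird', 'flies'} (set order may vary)
```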
The fast downward planning system Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward's approach to solving multivalued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward's best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multiheuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the "classical" (i. e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements.
On the complexity of database queries We revisit the issue of the complexity of database queries, in the light of the recent parametric refinement of complexity theory. We show that, if the query size (or the number of variables in the query) is considered as a parameter, then the relational calculus and its fragments (conjunctive queries, positive queries) are classified at appropriate levels of the so-called W hierarchy of Downey and Fellows. These results strongly suggest that the query size is inherently in the exponent of the data complexity of any query evaluation algorithm, with the implication becoming stronger as the expressibility of the query language increases. For recursive languages (fixpoint logic, Datalog) this is provably the case (14). On the positive side, we show that this exponential dependence can be avoided for the extension of acyclic queries with ≠ (but not <) inequalities.
Embedding revision programs in logic programming situation calculus Revision programs were introduced by Marek and Truszczynski to specify a change in knowledge bases. In this paper, we show how to embed revision programs in logic programs with situation calculus notation. We extend Marek and Truszczynski's approach to allow an incomplete initial knowledge base, and extend the rules of revision programs to depend both on the initial and the final knowledge base. We show how revision programs and its proposed extension can be incorporated in theories of actions, and how our usage of situation calculus notation makes this easier and elegant.
A comparison of FFS disk allocation policies The 4.4BSD file system includes a new algorithm for allocating disk blocks to files. The goal of this algorithm is to improve file clustering, increasing the amount of sequential I/O when reading or writing files, thereby improving file system performance. In this paper we study the effectiveness of this algorithm at reducing file system fragmentation. We have created a program that artificially ages a file system by replaying a workload similar to that experienced by a real file system. We used this program to evaluate the effectiveness of the new disk allocation algorithm by replaying ten months of activity on two file systems that differed only in the disk allocation algorithms that they used. At the end of the ten month simulation, the file system using the new allocation algorithm had approximately half the fragmentation of a similarly aged file system that used the traditional disk allocation algorithm. Measuring the performance difference between the two file systems by reading and writing the same set of files on the two systems showed that this decrease in fragmentation improved file write throughput by 20% and read throughput by 32%. In certain test cases, the new allocation algorithm provided a performance improvement of greater than 50%.
Representing the Process Semantics in the Event Calculus In this paper we shall present a translation of the process semantics [5] to the event calculus. The aim is to realize a method of integrating high-level semantics with logical calculi to reason about continuous change. The general translation rules and the soundness and completeness theorem of the event calculus with respect to the process semantics are main technical results of this paper.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.128889
0.066667
0.027913
0.011111
0.004193
0.000139
0.000079
0.00001
0
0
0
0
0
0
Exploiting Structure in Policy Construction Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small state spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and propositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with recent approximation methods.
Model minimization, regression, and propositional STRIPS planning Propositional STRIPS planning problems can be viewed as finite state automata (FSAs) represented in a factored form. Automaton minimization is a well-known technique for reducing the size of an explicit FSA. Recent work in computer-aided verification on model checking has extended this technique to provide automaton minimization algorithms for factored FSAs. In this paper, we consider the relationship between STRIPS problem-solving techniques such as regression and the recently developed automaton minimization techniques for factored FSAs. We show that regression computes a partial and approximate minimized form of the FSA corresponding to the STRIPS problem. We then define a systematic form of regression which computes a partial but exact minimized form of the associated FSA. We also relate minimization to methods for performing reachability analysis to detect irrelevant fluents. Finally, we show that exact computation of the minimized automaton is NP-complete under the assumption that this automaton is polynomial in size.
The Complexity of Model Aggregation We show that the problem of transforming a structured Markov decision process (MDP) into a Bounded Interval MDP is coNP^PP-hard. In particular, the test for ε-homogeneity, a necessary part of verifying any proposed partition, is coNP^PP-complete. This indicates that, without further assumptions on the sorts of partitioning allowed or the structure of the original propositional MDP, this is not likely to be a practical approach. We also analyze the complexity of finding the minimal-size partition, and of the k-block partition existence problem. Finally, we show that the test for homogeneity of an exact partition is complete for coNP^C=P, which is the same class as coNP^PP. All of this analysis applies equally well to the process of partitioning the state space via Structured Value Iteration.
Local Conditional High-Level Robot Programs When it comes to building robot controllers, high-level programming arises as a feasible alternative to planning. The task then is to verify a high-level program by finding a legal execution of it. However, interleaving offline verification with execution in the world seems to be the most practical approach for large programs and complex scenarios involving information gathering and exogenous events. In this paper, we present a mechanism for performing local lookahead for the Golog family of high-level robot programs. The main features of such a mechanism are that it takes sensing seriously by constructing conditional plans that are ready to be executed in the world, and it mixes perfectly with an account of interleaved perception, planning, and action. Also, a simple implementation is developed.
The Complexity of Plan Existence and Evaluation in Probabilistic Domains We examine the computational complexity of testing and finding small plans in probabilistic planning domains (both flat and succinct). We show that many problems of interest are complete for a variety of complexity classes: PL, P, NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. Of these, the probabilistic classes PP and NP^PP are likely to be of special interest in the field of uncertainty in artificial intelligence and are deserving of additional study.
Complexity issues in Markov decision processes We survey the complexity of computational problems about Markov decision processes: evaluating policies, finding good and best policies, approximating best policies, and related decision problems.
Expressive equivalence of planning formalisms A concept of expressive equivalence for planning formalisms based on polynomial transformations is defined. It is argued that this definition is reasonable and useful both from a theoretical and from a practical perspective; if two languages are equivalent, then theoretical results carry over and, more practically, we can model an application problem in one language and then easily use a planner for the other language. In order to cope with the problem of exponentially sized solutions for...
Representing action: indeterminacy and ramifications We define and study a high-level language for describing actions, more expressive than the action language A introduced by Gelfond and Lifschitz. The new language, AR, allows us to describe actions with indirect effects (ramifications), nondeterministic actions, and actions that may be impossible to execute. It has symbols for nonpropositional fluents and for the fluents that are exempt from the commonsense law of inertia. Temporal projection problems specified using the language AR can be...
Monotonic reductions, representative equivalence, and compilation of intractable problems The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science. In this article, we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice.
The Computational Complexity of Agent Design Problems This paper investigates the computational complexity of a fundamental problem in multi-agent systems: given an environment together with a specification of some task, can we construct an agent that will successfully achieve the task in the environment? We refer to this problem as agent design. Using an abstract formal model of agents and their environments, we begin by investigating various possible ways of specifying tasks for agents, and identify two important classes of such tasks. Achievement tasks are those in which an agent is required to bring about one of a specified set of goal states, and maintenance tasks are those in which an agent is required to avoid some specified set of states. We prove that in the most general case the agent design problem is PSPACE-complete for both achievement and maintenance tasks. We briefly discuss the automatic synthesis of agents from task environment specifications, and conclude by discussing related work and presenting some conclusions.
Automatic Polytime Reductions of NP Problems into a Fragment of STRIPS.
Compilability of Domain Descriptions in the Language A
MIND: A black-box energy consumption model for disk arrays Energy consumption is becoming a growing concern in data centers. Many energy-conservation techniques have been proposed to address this problem. However, an integrated method is still needed to evaluate energy efficiency of storage systems and various power conservation techniques. Extensive measurements of different workloads on storage systems are often very time-consuming and require expensive equipments. We have analyzed changing characteristics such as power and performance of stand-alone disks and RAID arrays, and then defined MIND as a black box power model for RAID arrays. MIND is devised to quantitatively measure the power consumption of redundant disk arrays running different workloads in a variety of execution modes. In MIND, we define five modes (idle, standby, and several types of access) and four actions, to precisely characterize power states and changes of RAID arrays. In addition, we develop corresponding metrics for each mode and action, and then integrate the model and a measurement algorithm into a popular trace tool - blktrace. With these features, we are able to run different IO traces on large-scale storage systems with power conservation techniques. Accurate energy consumption and performance statistics are then collected to evaluate energy efficiency of storage system designs and power conservation techniques. Our experiments running both synthetic and real-world workloads on enterprise RAID arrays show that MIND can estimate power consumptions of disk arrays with an error rate less than 2%.
Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification.
1.010468
0.011292
0.011233
0.008696
0.00579
0.004262
0.001992
0.000428
0.000071
0.000017
0.000001
0
0
0
Deep Learning in Microscopy Image Analysis: A Survey. Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning is emerging as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a sna...
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
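A brute-force sketch of the stable model semantics via the Gelfond-Lifschitz reduct: guess a candidate set of atoms, reduce the program with respect to it, and check that the candidate is the least model of the reduct. The two-rule program below is a standard toy example, not taken from the paper.

```python
# Brute-force stable models of a ground normal program.
# A rule is (head, positive_body, negative_body).
from itertools import chain, combinations

rules = [("p", set(), {"q"}),    # p :- not q.
         ("q", set(), {"p"})]    # q :- not p.
atoms = {"p", "q"}

def minimal_model(positive_rules):
    """Least model of a negation-free program by fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules, atoms):
    for bits in chain.from_iterable(combinations(sorted(atoms), r)
                                    for r in range(len(atoms) + 1)):
        candidate = set(bits)
        # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
        # the candidate; delete negative literals from the remaining rules.
        reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & candidate)]
        if minimal_model(reduct) == candidate:
            yield candidate

print(list(stable_models(rules, atoms)))   # [{'p'}, {'q'}]
```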
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
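The basic recursion that the paper's solver builds on can be sketched in a few lines: split on the outermost quantified variable, requiring one true branch for an existential and both branches for a universal. This toy evaluator omits all of the paper's pruning techniques, and the clause encoding is an assumption made for illustration.

```python
# Naive QBF evaluation in the spirit of extending Davis-Putnam.
def evaluate(prefix, clauses):
    """prefix: list of (quantifier, var) pairs, outermost first;
    clauses: frozensets of signed literals (var, sign)."""
    if any(len(c) == 0 for c in clauses):
        return False                       # an empty clause is falsified
    if not clauses:
        return True                        # no clauses left: formula holds
    (q, v), rest = prefix[0], prefix[1:]
    branches = []
    for value in (True, False):
        # Assign v := value: drop satisfied clauses, shrink falsified literals.
        new = [c - {(v, not value)} for c in clauses if (v, value) not in c]
        branches.append(evaluate(rest, new))
    return (branches[0] or branches[1]) if q == "E" else (branches[0] and branches[1])

# Forall x Exists y: (x or y) and (not x or not y) -- true: choose y = not x.
prefix = [("A", "x"), ("E", "y")]
clauses = [frozenset({("x", True), ("y", True)}),
           frozenset({("x", False), ("y", False)})]
print(evaluate(prefix, clauses))  # True
```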
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
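A minimal numpy sketch of the kernel PCA recipe from the abstract: form a polynomial kernel matrix, double-center it in feature space, and project onto the leading eigenvectors. Kernel choice, degree, and data here are illustrative assumptions.

```python
# Kernel PCA with a polynomial kernel (e.g. products of pixel values).
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    K = (X @ X.T + 1.0) ** degree                # polynomial kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # top components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                           # projections of training data

X = np.random.default_rng(2).standard_normal((10, 3))
print(kernel_pca(X).shape)   # (10, 2)
```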
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Multi-Task Learning Framework for Emotion Recognition Using 2D Continuous Space. Dimensional models have been proposed in psychology studies to represent complex human emotional expressions. Activation and valence are two common dimensions in such models. They can be used to describe certain emotions. For example, anger is one type of emotion with a low valence and high activation value; neutral has both a medium level valence and activation value. In this work, we propose to apply multi-task learning to leverage activation and valence information for acoustic emotion recognition based on the deep belief network (DBN) framework. We treat the categorical emotion recognition task as the major task. For the secondary task, we leverage activation and valence labels in two different ways, category level based classification and continuous level based regression. The combination of the loss functions from the major and secondary tasks is used as the objective function in the multi-task learning framework. After iterative optimization, the values from the last hidden layer in the DBN are used as new features and fed into a support vector machine classifier for emotion recognition. Our experimental results on the Interactive Emotional Dyadic Motion Capture and Sustained Emotionally Colored Machine-Human Interaction Using Nonverbal Expression databases show significant improvements on unweighted accuracy, illustrating the benefit of utilizing additional information in a multi-task learning setup for emotion recognition.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Preliminary investigation of Boltzmann machine classifiers for speaker recognition.
Audio Chord Recognition with Recurrent Neural Networks.
Learning Semantic Representations for the Phrase Translation Model. This paper presents a novel semantic-based phrase translation model. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent semantic space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a multi-layer neural network whose weights are learned on parallel training data. The learning is aimed to directly optimize the quality of end-to-end machine translation results. Experimental evaluation has been performed on two Europarl translation tasks, English-French and German-English. The results show that the new semantic-based phrase translation model significantly improves the performance of a state-of-the-art phrase-based statistical machine translation system, leading to a gain of 0.7-1.0 BLEU points.
Analyzing Drum Patterns Using Conditional Deep Belief Networks.
Adaptive dropout for training deep neural networks.
Advances in optimizing recurrent networks After a more than decade-long period of relatively little research activity in the area of recurrent neural networks, several new developments will be reviewed here that have allowed substantial progress both in understanding and in technical solutions towards more efficient training of recurrent networks. These advances have been motivated by and related to the optimization issues surrounding deep learning. Although recurrent networks are extremely powerful in what they can in principle represent in terms of modeling sequences, their training is plagued by two aspects of the same issue regarding the learning of long-term dependencies. Experiments reported here evaluate the use of clipping gradients, spanning longer time ranges with leaky integration, advanced momentum techniques, using more powerful output probability models, and encouraging sparser gradients to help symmetry breaking and credit assignment. The experiments are performed on text and music data and show off the combined effects of these techniques in generally improving both training and test error.
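One of the techniques reviewed here, clipping gradients, fits in a few lines; the sketch below rescales a set of gradient arrays when their global norm exceeds a threshold (the threshold value is an illustrative assumption).

```python
# Gradient-norm clipping, a standard guard against exploding gradients
# when training recurrent networks.
import numpy as np

def clip_gradients(grads, max_norm=5.0):
    """Rescale a list of gradient arrays if their global norm exceeds max_norm."""
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm = 13
print(clip_gradients(grads))   # rescaled so the global norm is 5
```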
Learning Features from Music Audio with Deep Belief Networks.
A fast learning algorithm for deep belief nets. We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
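The greedy layerwise procedure in this abstract trains each layer as a restricted Boltzmann machine; a single contrastive-divergence (CD-1) update can be sketched as follows. Sizes, learning rate, and data are toy assumptions, and a faithful reproduction of the paper would stack several such layers and fine-tune with the wake-sleep algorithm.

```python
# One CD-1 update for a binary restricted Boltzmann machine.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.1, rng=np.random.default_rng(0)):
    h_prob = sigmoid(v0 @ W + b_h)                     # up pass
    h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_prob = sigmoid(h_samp @ W.T + b_v)               # down pass (reconstruction)
    h_recon = sigmoid(v_prob @ W + b_h)                # up pass again
    W += lr * (v0.T @ h_prob - v_prob.T @ h_recon) / len(v0)
    b_v += lr * (v0 - v_prob).mean(axis=0)
    b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_v, b_h

rng = np.random.default_rng(3)
v = (rng.random((8, 6)) < 0.5).astype(float)           # batch of 8 binary vectors
W = 0.01 * rng.standard_normal((6, 4))                 # 6 visible, 4 hidden units
W, b_v, b_h = cd1_step(v, W, np.zeros(6), np.zeros(4))
```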
Learning deep hierarchical visual feature coding. In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferrable between different image data sets. Further analysis reveal the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.
Shallow vs. Deep Sum-Product Networks. We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning.
The Gamma database machine project This paper describes the design of the Gamma database machine and the techniques employed in its implementation. Gamma is a relational database machine currently operating on an Intel iPSC/2 hypercube with 32 processors and 32 disk drives. Gamma employs three key technical ideas which enable the architecture to be scaled to 100s of processors. First, all relations are horizontally partitioned across multiple disk drives enabling relations to be scanned in parallel. Second, novel parallel algorithms based on hashing are used to implement the complex relational operators such as join and aggregate functions. Third, dataflow scheduling techniques are used to coordinate multioperator queries. By using these techniques it is possible to control the execution of very complex queries with minimal coordination - a necessity for configurations involving a very large number of processors. In addition to describing the design of the Gamma software, a thorough performance evaluation of the iPSC/2
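Gamma's first key idea, hash-based horizontal partitioning, can be illustrated with a small single-process sketch: tuples of both join relations are routed by the hash of the join key so each partition joins independently (in Gamma, on separate processors). The relations and schema below are invented for illustration.

```python
# Hash-partitioned equi-join: route tuples by hash(join_key), then join each
# partition locally with a build/probe hash join.
def hash_partition(tuples, key, n_workers):
    parts = [[] for _ in range(n_workers)]
    for t in tuples:
        parts[hash(t[key]) % n_workers].append(t)
    return parts

def partitioned_join(R, S, n_workers=4):
    out = []
    for r_part, s_part in zip(hash_partition(R, 0, n_workers),
                              hash_partition(S, 0, n_workers)):
        lookup = {}
        for s in s_part:                      # build side
            lookup.setdefault(s[0], []).append(s)
        for r in r_part:                      # probe side
            out.extend(r + s[1:] for s in lookup.get(r[0], []))
    return out

R = [(1, "a"), (2, "b")]
S = [(1, "x"), (2, "y"), (3, "z")]
print(partitioned_join(R, S))   # [(1, 'a', 'x'), (2, 'b', 'y')] in some order
```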
Distributed Storage Codes With Repair-by-Transfer and Nonachievability of Interior Points on the Storage-Bandwidth Tradeoff Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of $k$ nodes within the $n$-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of $d$ nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when $d = n-1$. This code has a particularly simple graphical description, and most interestingly has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data as repair by transfer. The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term “helper node pooling,” and show that it is the necessity to satisfy such scenarios that overconstrains the system.
Efficient Verification of B-tree Integrity
A multiscale two-point flux-approximation method A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common to all such methods is that they rely on a compatible primal-dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regard to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
1.100151
0.100302
0.100302
0.050151
0.025076
0.015434
0.004811
0.000271
0.000037
0.000007
0
0
0
0
E-health monitoring system enhancement with Gaussian mixture model. In order to enhance the healthcare system, we have designed and developed a system prototype which remotely monitors patients' vital parameters via a mobile-based Android application. The proposed e-health care system collects patients' biological and personal information along with the corresponding vital parameters and stores this metadata in the healthcare database servers. The distributed servers are connected with the GSP system, so the information extracted from the server is fed directly to the doctor's mobile device as well as to the patient's mobile device in a presentable format. The system also uses Frontline SMS as an SMS service, sending an SMS to the doctor's mobile device automatically whenever one of the patient's vital parameters goes out of the normal range. In this paper, we present a GMM (Gaussian mixture model) built on extracted features of the patient information and use it to assign each patient to a specialized doctor. We show that the GMM-based algorithm efficiently balances the patient load across doctors. This approach enhances the e-health monitoring system both in normal situations and in the case of natural disasters, and the proposed load balancing spares patients unnecessarily long delays in receiving medical advice. The results presented in this work show that doctors of all categories and specializations are loaded rationally and uniformly. To our knowledge, the GMM-based approach is a new component for enhancing e-health care systems.
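A minimal sketch of how a GMM might be used to group patients by vital-sign features before routing them to doctor groups; the features, component count, and routing rule are assumptions for illustration, not details from the paper.

```python
# Cluster patients by vital-sign features with a Gaussian mixture model,
# then route each mixture component to a (hypothetical) doctor group.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Each row: [heart_rate, systolic_bp, body_temp] for one patient (synthetic).
X = np.column_stack([
    rng.normal(80, 12, 300),
    rng.normal(120, 15, 300),
    rng.normal(36.8, 0.5, 300),
])

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
cluster = gmm.predict(X)  # component sizes approximate the load per group

# Hypothetical routing: one doctor group per mixture component.
for k in range(4):
    print(f"component {k}: {np.sum(cluster == k)} patients")
```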
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
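To make the definition operational, here is a brute-force sketch of stable model checking via the Gelfond-Lifschitz reduct; the rule encoding and the two-rule example are my own, not from the paper.

```python
# A rule is (head, positive_body, negative_body). M is a stable model
# iff M equals the least model of the reduct of the program w.r.t. M.
from itertools import chain, combinations

def least_model(definite_rules):
    m = set()
    changed = True
    while changed:
        changed = False
        for head, body in definite_rules:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(program, m):
    # Reduct: drop rules whose negative body intersects M; drop "not" parts.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & m)]
    return least_model(reduct) == m

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
atoms = {"p", "q"}
for cand in chain.from_iterable(combinations(sorted(atoms), r)
                                for r in range(len(atoms) + 1)):
    if is_stable(prog, set(cand)):
        print("stable model:", set(cand))
```

Answer-set systems avoid this exponential enumeration, but the stability check itself is exactly this reduct-and-fixpoint test.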
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
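For orientation, the following is the naive quantifier-expansion baseline that such solvers improve on: expand each universal and existential variable in prefix order and evaluate the CNF matrix at the leaves. The clause encoding (positive literal = var, negative = -var) is an assumed convention.

```python
# Naive QBF evaluation by full quantifier expansion.
# prefix: list of ('forall' | 'exists', var); clauses: CNF over ints.

def eval_cnf(clauses, assign):
    return all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses)

def eval_qbf(prefix, clauses, assign=None):
    assign = assign or {}
    if not prefix:
        return eval_cnf(clauses, assign)
    q, v = prefix[0]
    branches = (eval_qbf(prefix[1:], clauses, {**assign, v: b})
                for b in (False, True))
    # Universal variables must hold on both branches; existentials on one.
    return all(branches) if q == 'forall' else any(branches)

# forall x exists y . (x or y) and (not x or not y)  -- true (take y = not x)
prefix = [('forall', 1), ('exists', 2)]
clauses = [[1, 2], [-1, -2]]
print(eval_qbf(prefix, clauses))  # True
```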
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
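A minimal numpy sketch of the method: build a kernel matrix, center it in feature space, and read nonlinear components off its eigenvectors. The polynomial degree and the random data are assumptions.

```python
# Kernel PCA with a polynomial kernel (illustrative data and degree).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 16))          # 50 samples, 16 features

K = (X @ X.T + 1.0) ** 5                   # degree-5 polynomial kernel
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one # center in feature space

vals, vecs = np.linalg.eigh(Kc)            # eigh returns ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]     # flip to descending
alpha = vecs[:, :2] / np.sqrt(np.maximum(vals[:2], 1e-12))  # normalize
Z = Kc @ alpha                             # first two nonlinear components
print(Z.shape)                             # (50, 2)
```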
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
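The core numerical idea can be shown in a few lines: solve the linearized SLAM least-squares problem through a square-root factor of the information matrix instead of propagating a covariance as the EKF does. The toy Jacobian and residuals below are random stand-ins for real odometry and landmark measurements.

```python
# Square-root information smoothing in miniature:
# solve  min ||A x - b||^2  via QR on the measurement Jacobian A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))   # stacked, whitened measurement Jacobian
b = rng.standard_normal(40)         # stacked residuals

Q, R = np.linalg.qr(A)              # A = Q R; R is the square-root factor
x = np.linalg.solve(R, Q.T @ b)     # back-substitution: the full trajectory

# Equivalent route: Cholesky on the information matrix A^T A.
L = np.linalg.cholesky(A.T @ A)
y = np.linalg.solve(L, A.T @ b)
x_chol = np.linalg.solve(L.T, y)
print(np.allclose(x, x_chol))       # True
```

In real SLAM problems A is sparse, and a good column ordering keeps R sparse too, which is where the paper's variable-ordering observations pay off.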
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Predicting the Price of Bitcoin Using Machine Learning The goal of this paper is to ascertain with what accuracy the direction of Bitcoin price in USD can be predicted. The price data is sourced from the Bitcoin Price Index. The task is achieved with varying degrees of success through the implementation of a Bayesian optimised recurrent neural network (RNN) and a Long Short Term Memory (LSTM) network. The LSTM achieves the highest classification accuracy of 52% and a RMSE of 8%. The popular ARIMA model for time series forecasting is implemented as a comparison to the deep learning models. As expected, the non-linear deep learning methods outperform the ARIMA forecast which performs poorly. Finally, both deep learning models are benchmarked on both a GPU and a CPU with the training time on the GPU outperforming the CPU implementation by 67.7%.
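A minimal sketch (not the paper's exact architecture, data pipeline, or Bayesian optimisation setup) of an LSTM classifier for next-day price direction; the window length, layer size, and training budget are assumptions.

```python
# Sliding-window LSTM for price-direction classification (illustrative).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(prices, w=30):
    """Turn a price series into (window, next-direction) training pairs."""
    X = np.array([prices[i:i + w] for i in range(len(prices) - w)])
    y = (prices[w:] > prices[w - 1:-1]).astype(float)  # 1 if price rose
    return X[..., None], y  # add a feature axis for the LSTM

# Synthetic random-walk "prices" stand in for the Bitcoin Price Index.
prices = np.cumsum(np.random.default_rng(0).standard_normal(500)) + 100
X, y = make_windows(prices)

model = Sequential([LSTM(32, input_shape=(30, 1)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```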
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Process algebra approach to reasoning about concurrent actions A reasonable transition rule is proposed for synchronized actions, and some equational properties of bisimilarity and weak bisimilarity in the process algebra for reasoning about concurrent actions are presented.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
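An illustrative sketch of the parity mechanics involved (a simplification, not the paper's full two-dimensional scheme): XOR parity reconstructs a lost element from the rest of its row, and mirroring parity elements adds a repair path that needs no arithmetic at all, only copying.

```python
# XOR parity over a square data array, plus mirrored row parity.
import numpy as np

rng = np.random.default_rng(0)
n = 4
data = rng.integers(0, 256, (n, n), dtype=np.uint8)   # n^2 data elements

row_parity = np.bitwise_xor.reduce(data, axis=1)      # n parity elements
col_parity = np.bitwise_xor.reduce(data, axis=0)      # n more parity elements
mirror = row_parity.copy()  # extra copies of half the parity (the proposal's
                            # added redundancy: repair by plain transfer)

# Lose data[1, 2]; rebuild it by XOR-ing the rest of its row with the parity.
rebuilt = row_parity[1] ^ np.bitwise_xor.reduce(np.delete(data[1], 2))
assert rebuilt == data[1, 2]
```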
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Performance model-directed data sieving for high-performance I/O Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving for short. It improves the existing data sieving approach in two respects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing memory pressure in large-scale systems, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
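A minimal sketch of the underlying data sieving idea and of the when-is-it-beneficial decision that the performance model directs; the threshold and fallback rule here are illustrative assumptions, not the paper's model.

```python
# Classic data sieving: read one contiguous hole-spanning block and pick
# out the requested pieces, unless the holes make that wasteful.

def data_sieve_read(f, requests, waste_limit=0.5):
    """requests: list of (offset, length), assumed sorted, non-overlapping."""
    lo = requests[0][0]
    hi = max(off + ln for off, ln in requests)
    useful = sum(ln for _, ln in requests)
    if useful / (hi - lo) < 1.0 - waste_limit:
        # Too many holes: fall back to individual small reads.
        out = []
        for off, ln in requests:
            f.seek(off)
            out.append(f.read(ln))
        return out
    f.seek(lo)
    buf = f.read(hi - lo)  # one large contiguous read
    return [buf[off - lo:off - lo + ln] for off, ln in requests]

# Usage: with open("data.bin", "rb") as f:
#     chunks = data_sieve_read(f, [(0, 4), (64, 4), (128, 4)])
```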
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the symmetric travelling salesman problem II: Lifting theorems and facets. Four lifting theorems are derived for the symmetric travelling salesman polytope. They provide constructions and state conditions under which a linear inequality which defines a facet of the n-city travelling salesman polytope retains its facetial property for the (n + m)-city travelling salesman polytope, where m ≥ 1 is an arbitrary integer. In particular, they permit a proof that all subtour-elimination as well as comb inequalities define facets of the convex hull of tours of the n-city travelling salesman problem, where n is an arbitrary integer.
Facets of the knapsack polytope Abstract A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality. A sufficient condition is also established for a larger class of inequalities (with coefficients not restricted to 0 and 1) to define a facet for the same polytope, and a procedure is given for generating all facets in the above two classes. The procedure can be viewed as a way of generating cutting planes for 0–1 programs.
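A small worked instance of this machinery (the numbers are my own, not from the abstract), which also connects to the lifting construction in the preceding entry:

```latex
% Knapsack constraint over binary variables:
\[ 5x_1 + 5x_2 + 5x_3 + 4x_4 \le 10, \qquad x_j \in \{0,1\}. \]
% C = \{1,2,3\} is a minimal cover: 5+5+5 = 15 > 10, while every proper
% subset fits (any pair weighs exactly 10). The cover inequality
\[ x_1 + x_2 + x_3 \le |C| - 1 = 2 \]
% is valid for the knapsack polytope. Lifting x_4 asks for the largest
% \alpha_4 keeping  x_1 + x_2 + x_3 + \alpha_4 x_4 \le 2  valid: with
% x_4 = 1 the residual capacity is 10 - 4 = 6, so at most one of
% x_1, x_2, x_3 fits, and \alpha_4 = 2 - 1 = 1, giving the stronger
\[ x_1 + x_2 + x_3 + x_4 \le 2. \]
```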
On linear characterizations of combinatorial optimization problems We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP = co-NP -- a very unlikely event. We also apply the ellipsoid method for linear programming to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.
New families of hypohamiltonian graphs We construct three new infinite families of hypohamiltonian graphs having respectively 3k+1 vertices (k ≥ 3), 3k vertices (k ≥ 5) and 5k vertices (k ≥ 4); in particular, we exhibit a hypohamiltonian graph of order 19 and a cubic hypohamiltonian graph of order 20, the existence of which was still in doubt. Using these families, we get a lower bound for the number of non-isomorphic hypohamiltonian graphs of order 3k and 5k. We also give an example of an infinite graph G having no two-way infinite hamiltonian path, but in which every vertex-deleted subgraph G - x has such a path.
Optimization Problems And The Polynomial Hierarchy It is demonstrated that such problems as the symmetric Travelling Salesman Problem, Chromatic Number Problem, Maximal Clique Problem and a Knapsack Packing Problem are in the $\Delta^{p}_{2}$ level of PH and no lower if $\Sigma^{p}_{1} \neq \Pi^{p}_{1}$, i.e., if NP ≠ co-NP. This shows that these problems cannot be solved by polynomial reductions that use only positive information from an NP oracle, if NP ≠ co-NP. It is then shown how to extend these results to prove that interesting problems are properly in $\Delta^{p,X}_{k+1}$ for all $X$ and $k$ for which $\Sigma^{p,X}_{k} \neq \Pi^{p,X}_{k}$ in $\mathrm{PH}^{X}$.
The Three-Color and Two-Color TantrixTM Rotation Puzzle Problems Are NP-Complete Via Parsimonious Reductions Holzer and Holzer [M. Holzer, W. Holzer, Tantrix(TM) rotation puzzles are intractable, Discrete Applied Mathematics 144(3) (2004) 345-358] proved that the Tantrix(TM) rotation puzzle problem with four colors is NP-complete, and they showed that the infinite variant of this problem is undecidable. In this paper, we study the three-color and two-color Tantrix(TM) rotation puzzle problems (3-TRP and 2-TRP) and their variants. Restricting the number of allowed colors to three (respectively, to two) reduces the set of available Tantrix(TM) tiles from 56 to 14 (respectively, to 8). We prove that 3-TRP and 2-TRP are NP-complete, which answers a question raised by Holzer and Holzer [M. Holzer, W. Holzer, Tantrix(TM) rotation puzzles are intractable, Discrete Applied Mathematics 144(3) (2004) 345-358] in the affirmative. Since our reductions are parsimonious, it follows that the problems Unique-3-TRP and Unique-2-TRP are DP-complete under randomized reductions. We also show that the another-solution problems associated with 4-TRP, 3-TRP, and 2-TRP are NP-complete. Finally, we prove that the infinite variants of 3-TRP and 2-TRP are undecidable.
Psychiatric Diagnosis from the Viewpoint of Computational Logic While medical information systems have become common in the United States, commercial systems that automate or assist in the process of medical diagnosis remain uncommon. This is not surprising, since automating diagnosis requires considerable sophistication both in the understanding of medical epidemiology and in knowledge representation techniques. This paper is an interdisciplinary study of how recent results in logic programming and non-monotonic reasoning can aid in psychiatric diagnosis. We argue that logically representing psychiatric diagnosis as codified in the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, requires abduction over programs that include both explicit and non-stratified default negation, as well as dynamic rules that express preferences between conclusions. We show how such programs can be translated into abductive frameworks over normal logic programs and implemented using recently introduced logic programming techniques. Finally, we note how such programs are used in a commercial product, Diagnostica.
From Disjunctive Programs to Abduction The purpose of this work is to clarify the relationship between three approaches to representing incomplete information in logic programming. Classical negation and epistemic disjunction are used in the first of these approaches, abductive logic programs with classical negation in the second, and a simpler form of abductive logic programming --- without classical negation --- in the third. In the literature, these ideas have been illustrated with examples related to properties of actions, and ...
Bounded queries, approximations, and the Boolean hierarchy This paper investigates nondeterministic bounded query classes in relation to the complexity of NP-hard approximation problems and the Boolean Hierarchy. Nondeterministic bounded query classes turn out to be rather suitable for describing the complexity of NP-hard approximation problems. The results in this paper take advantage of this machine-based approach.
Actions with Indirect Effects (Preliminary Report)
Symbolic Decision Procedures for QBF Much recent work has gone into adapting techniques that were originally developed for SAT solving to QBF solving. In particular, QBF solvers are often based on SAT solvers. Most competitive QBF solvers are search-based. In this work we explore an alternative approach to QBF solving, based on symbolic quantifier elimination. We extend some recent symbolic approaches for SAT solving to symbolic QBF solving, using various decision-diagram formalisms such as OBDDs and ZDDs. In both approaches, QBF formulas are solved by eliminating all their quantifiers. Our first solver, QMRES, maintains a set of clauses represented by a ZDD and eliminates quantifiers via multi-resolution. Our second solver, QBDD, maintains a set of OBDDs, and eliminates quantifiers by applying them to the underlying OBDDs. We compare our symbolic solvers to several competitive search-based solvers. We show that QBDD is not competitive, but QMRES compares favorably with search-based solvers on various benchmarks consisting of non-random formulas.
Enhancing disjunctive logic programming systems by SAT checkers Disjunctive logic programming (DLP) with stable model semantics is a powerful nonmonotonic formalism for knowledge representation and reasoning. Reasoning with DLP is harder than with normal (∨-free) logic programs, because stable model checking--deciding whether a given model is a stable model of a propositional DLP program--is co-NP-complete, while it is polynomial for normal logic programs. This paper proposes a new transformation Γ_M(P), which reduces stable model checking to UNSAT--i.e., to deciding whether a given CNF formula is unsatisfiable. The stability of a model M of a program P thus can be verified by calling a Satisfiability Checker on the CNF formula Γ_M(P). The transformation is parsimonious (i.e., no new symbol is added), and efficiently computable, as it runs in logarithmic space (and therefore in polynomial time). Moreover, the size of the generated CNF formula never exceeds the size of the input (and is usually much smaller). We complement this transformation with modular evaluation results, which allow for efficient handling of large real-world reasoning problems. The proposed approach to stable model checking has been implemented in DLV--a state-of-the-art implementation of DLP. A number of experiments and benchmarks have been run using SATZ as Satisfiability checker. The results of the experiments are very positive and confirm the usefulness of our techniques.
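The abstract above pins down the semantics being checked: M is a stable model of a disjunctive program P exactly when M is a minimal model of the Gelfond-Lifschitz reduct P^M. The sketch below checks that definition directly by brute force on a toy program; it is not the paper's Γ_M(P) transformation (which avoids this exponential subset enumeration by delegating the test to a SAT solver), and the rule encoding is our own.

```python
from itertools import chain, combinations

# Rules are triples (head, pos_body, neg_body) of atom sets, read as
#   h1 v ... v hn :- p1, ..., pm, not q1, ..., not qk.
# M is a stable model of P iff M is a minimal model of the reduct P^M.

def reduct(program, M):
    # Drop rules whose 'not' part is contradicted by M; erase 'not' literals.
    return [(head, pos) for (head, pos, neg) in program if not (neg & M)]

def is_model(rules, I):
    # Every rule whose positive body holds in I must have a true head atom.
    return all(head & I for (head, pos) in rules if pos <= I)

def is_stable(program, M):
    r = reduct(program, M)
    if not is_model(r, M):
        return False
    # Minimality: no proper subset of M may also be a model of the reduct.
    proper = chain.from_iterable(combinations(sorted(M), k) for k in range(len(M)))
    return not any(is_model(r, set(s)) for s in proper)

# Program:  p v q.   q :- p.   r :- not p.
P = [({'p', 'q'}, set(), set()),
     ({'q'}, {'p'}, set()),
     ({'r'}, set(), {'p'})]
print(is_stable(P, {'q', 'r'}))  # True: minimal model of the reduct
print(is_stable(P, {'p', 'q'}))  # False: {q} already satisfies the reduct
```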
VI-attached database storage This work presents a VI-attached database storage architecture to improve database transaction rates. More specifically, we examine how VI-based interconnects can be used to improve I/O path performance between a database server and a storage subsystem. To facilitate the interaction between client applications and a VI-aware storage system, we design and implement a software layer called DSA that is layered between applications and VI. DSA takes advantage of specific VI features and deals with many of its shortcomings. We provide and evaluate one kernel-level and two user-level implementations of DSA. These implementations trade transparency and generality for performance at different degrees and, unlike research prototypes, are designed to be suitable for real-world deployment. We have also investigated many design trade-offs in the storage cluster. We present detailed measurements using a commercial database management system with both microbenchmarks and industrial database workloads on a mid-size, 4 CPU, and a large, 32 CPU, database server. We also compare the effectiveness of VI-attached storage with an iSCSI configuration, and conclude that storage protocols implemented using DSA over VI have significant performance advantages. More generally, our results show that VI-based interconnects and user-level communication can improve all aspects of the I/O path between the database system and the storage back-end. We also find that to make effective use of VI in I/O intensive environments, we need to provide substantially more functionality than what is currently provided by VI. Finally, new storage APIs that help minimize kernel involvement in the I/O path are needed to fully exploit the benefits of VI-based communication.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
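Since this abstract describes a concrete layout (n^2 data elements in an n x n grid, one XOR parity per row and per column, plus n mirrored parities), a small sketch helps make the arithmetic tangible. Everything below is illustrative: block size, indexing, and the recovery scenario are our own choices, not the paper's.

```python
import numpy as np

# n^2 data blocks in an n x n grid; XOR parity per row and per column
# (2n parity elements), plus n extra blocks mirroring the row parities.
n = 4
rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(n, n, 512), dtype=np.uint8)  # 512-byte blocks

row_parity = np.bitwise_xor.reduce(data, axis=1)  # n row parities
col_parity = np.bitwise_xor.reduce(data, axis=0)  # n column parities
row_mirror = row_parity.copy()                    # the proposed extra redundancy

# Recover a lost data block (i, j) from the surviving blocks in its row.
i, j = 1, 2
rest = np.bitwise_xor.reduce(np.delete(data, j, axis=1)[i], axis=0)
assert np.array_equal(np.bitwise_xor(rest, row_parity[i]), data[i, j])
print("block (%d, %d) rebuilt from row parity" % (i, j))
```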
1.110121
0.100194
0.100194
0.100194
0.050097
0.000476
0.000178
0.000061
0.000018
0.000001
0
0
0
0
Not So Easy Problems for Tree Decomposable Graphs We consider combinatorial problems that can be solved in polynomial time for graphs of bounded treewidth but where the order of the polynomial that bounds the running time is expected to depend on the treewidth bound. First we review some recent results for problems regarding list and equitable colorings, general factors, and generalized satisfiability. Second we establish a new hardness result for the problem of minimizing the maximum weighted outdegree for orientations of edge-weighted graphs of bounded treewidth.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
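The extension of Davis-Putnam to QBF that this abstract describes reduces, in its unoptimized core, to splitting on the outermost quantified variable and combining the two branches with OR (existential) or AND (universal). The sketch below is that bare recursion only, with none of the paper's techniques for taming universal quantifiers; the prefix/matrix encoding is our own.

```python
# prefix: list of ('exists'|'forall', var); matrix: CNF as a list of
# clauses, each clause a list of (var, negated) literal pairs.

def qbf_eval(prefix, clauses, asg=None):
    asg = asg or {}
    if not prefix:
        # Base case: evaluate the CNF matrix under the complete assignment.
        return all(any(asg[v] != neg for (v, neg) in clause) for clause in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (qbf_eval(rest, clauses, {**asg, var: val}) for val in (False, True))
    return any(branches) if q == 'exists' else all(branches)

# forall x exists y . (x v y) & (~x v ~y)   -- true: take y = not x
prefix = [('forall', 'x'), ('exists', 'y')]
matrix = [[('x', False), ('y', False)],   # x v y
          [('x', True), ('y', True)]]     # ~x v ~y
print(qbf_eval(prefix, matrix))  # True
```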
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
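The computation the abstract describes is short enough to sketch: build the kernel matrix, center it in feature space, and read principal components off its leading eigenvectors. The polynomial kernel degree, normalization, and random data below are illustrative choices, not the paper's experiments.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    n = len(X)
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    J = np.full((n, n), 1.0 / n)
    Kc = K - J @ K - K @ J + J @ K @ J            # center in feature space
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    vals, vecs = vals[::-1][:n_components], vecs[:, ::-1][:, :n_components]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # unit-norm feature-space axes
    return Kc @ alphas                            # projections of the training set

X = np.random.default_rng(1).normal(size=(100, 2))
print(kernel_pca(X).shape)  # (100, 2)
```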
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
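The core numerical step the abstract refers to, factorizing the measurement Jacobian into square-root form and back-substituting, fits in a few lines of numpy. The dense toy Jacobian below stands in for the sparse block structure of a real SAM problem; the sketch omits the relinearization loop and the column-ordering heuristics the paper highlights, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.normal(size=6)                   # stacked poses and landmarks (toy)
A = rng.normal(size=(20, 6))                  # whitened measurement Jacobian
b = A @ x_true + 0.01 * rng.normal(size=20)   # whitened residual vector

Q, R = np.linalg.qr(A)                        # R is the square-root information matrix
x_hat = np.linalg.solve(R, Q.T @ b)           # one back-substitution
print(np.linalg.norm(x_hat - x_true))         # close to zero
```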
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
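The geometric construction above (sampling subspaces along the geodesic between two points on the Grassmann manifold) has a standard closed form via principal angles, sketched below. We assume P1.T @ P2 is invertible and use random orthonormal bases for illustration; the paper would obtain P1, P2 from generative subspaces (e.g., PCA) of the source and target domains.

```python
import numpy as np

def geodesic_subspaces(P1, P2, ts):
    # Geodesic on the Grassmannian from span(P1) to span(P2),
    # parametrized by the principal angles theta between the subspaces.
    A = P1.T @ P2
    D = (P2 - P1 @ A) @ np.linalg.inv(A)          # assumes A invertible
    U, tan_theta, Vt = np.linalg.svd(D, full_matrices=False)
    theta, V = np.arctan(tan_theta), Vt.T
    return [P1 @ V @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))
            for t in ts]

d, k = 50, 5
rng = np.random.default_rng(0)
P1, _ = np.linalg.qr(rng.normal(size=(d, k)))
P2, _ = np.linalg.qr(rng.normal(size=(d, k)))
subspaces = geodesic_subspaces(P1, P2, np.linspace(0.0, 1.0, 5))
# Project features onto each intermediate subspace (X @ S) and train on the stack.
```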
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Autoencoder As Assistant Supervisor: Improving Text Representation For Chinese Social Media Text Summarization Most of the current abstractive text summarization models are based on the sequence-to-sequence model (Seq2Seq). The source content of social media is long and noisy, so it is difficult for Seq2Seq to learn an accurate semantic representation. Compared with the source content, the annotated summary is short and well written. Moreover, it shares the same meaning as the source content. In this work, we supervise the learning of the representation of the source content with that of the summary. In implementation, we regard a summary autoencoder as an assistant supervisor of Seq2Seq. Following previous work, we evaluate our model on a popular Chinese social media dataset. Experimental results show that our model achieves state-of-the-art performance on the benchmark dataset.
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic programs with classical negation
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning" of the program, or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program are defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection.These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales.The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
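For readers who want to try the staged detect/describe/match pipeline sketched above, OpenCV ships a SIFT implementation (opencv-python 4.4 or later; the original system predates this API). The file names below are hypothetical, and the ratio-test threshold is the conventional 0.75 rather than a value taken from this paper.

```python
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # hypothetical images
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-d descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]     # nearest-neighbor ratio test
print(len(good), "candidate matches")
```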
Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
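A quick way to reproduce the flavor of these experiments is a soft-margin SVM with a polynomial kernel on a small digit dataset, e.g., via scikit-learn. This is a modern stand-in, not the paper's implementation or benchmark; the degree and C values are arbitrary defaults.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel='poly', degree=3, C=1.0).fit(Xtr, ytr)  # polynomial feature space
print(clf.score(Xte, yte))                               # typically around 0.99
```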
Improving the I/O Performance of Real-Time Database Systems with Multiple-Disk Storage Structures
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For SMT With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.2
0.000098
0
0
0
0
0
0
0
0
0
0
0
0
Fast planning through planning graph analysis We introduce a new approach to planning in STRIPS-like domains based on constructing and analyzing a compact structure we call a Planning Graph. We describe a new planner, Graphplan, that uses this paradigm. Graphplan always returns a shortest-possible partial-order plan, or states that no valid plan exists. We provide empirical evidence in favor of this approach, showing that Graphplan outperforms the total-order planner, Prodigy, and the partial-order planner, UCPOP, on a variety of interesting natural and artificial planning problems. We also give empirical evidence that the plans produced by Graphplan are quite sensible. Since searches made by this approach are fundamentally different from the searches of other common planning methods, they provide a new perspective on the planning problem.
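The layered structure Graphplan builds can be illustrated by its reachability skeleton: starting from the initial facts, repeatedly apply every action whose preconditions are present and union in the add effects until the goals appear or a fixpoint is reached. The sketch below does only that; it omits delete effects, mutex propagation, and the backward plan-extraction search that make Graphplan complete, and the toy domain is our own.

```python
def build_layers(init, goals, actions, max_layers=50):
    # actions: (name, preconditions, add_effects) with frozenset fields.
    props, layers = set(init), [frozenset(init)]
    while not goals <= props and len(layers) <= max_layers:
        applicable = [a for a in actions if a[1] <= props]
        new = props.union(*(a[2] for a in applicable))
        if new == props:
            return None                  # fixpoint without the goals: unreachable
        props = new
        layers.append(frozenset(props))
    return layers if goals <= props else None

acts = [("pick",  frozenset({"handempty", "on_table_A"}), frozenset({"holding_A"})),
        ("stack", frozenset({"holding_A"}),               frozenset({"on_A_B"}))]
layers = build_layers({"handempty", "on_table_A"}, {"on_A_B"}, acts)
print(len(layers) - 1)  # 2: the goal first appears in proposition layer 2
```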
Requirements for automated service composition Automated service composition is an important approach to create aggregate services out of existing services. Several different approaches towards automated service composition exist. They differ not only in the used algorithms but also in the provided functionality. While some support the creation of compositions with alternative or parallel control flow, others are missing this functionality. This diversity stems from a missing consensus on the functionality required to automatically compose real-world services. Hence, with this paper we aim at providing the foundation for such a consensus. We derived the required functionality from multiple business scenarios set up in the Adaptive Services Grid (ASG) project.
Effect of knowledge representation on model based planning: experiments using logic programming encodings
Planning under continuous time and resource uncertainty: a challenge for AI We outline a class of problems, typical of Mars rover operations, that are problematic for current methods of planning under uncertainty. The existing methods fail because they suffer from one or more of the following limitations: 1) they rely on very simple models of actions and time, 2) they assume that uncertainty is manifested in discrete action outcomes, 3) they are only practical for very small problems. For many real world problems, these assumptions fail to hold. In particular, when planning the activities for a Mars rover, none of the above assumptions is valid: 1) actions can be concurrent and have differing durations, 2) there is uncertainty concerning action durations and consumption of continuous resources like power, and 3) typical daily plans involve on the order of a hundred actions. This class of problems may be of particular interest to the UAI community because both classical and decision-theoretic planning techniques may be useful in solving it. We describe the rover problem, discuss previous work on planning under uncertainty, and present a detailed, but very small, example illustrating some of the difficulties of finding good plans.
OBDD-based universal planning for synchronized agents in non-deterministic domains Recently model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. OBDDs can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present UMOP, a new universal OBDD-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to build an efficient OBDD-based representation of an NADL description. The UMOP planning system uses NADL and different OBDD-based universal planning algorithms. It includes the previously developed strong and strong cyclic planning algorithms. In addition, we introduce our new optimistic planning algorithm that relaxes optimality guarantees and generates plausible universal plans in some domains where no strong nor strong cyclic solution exists. We present empirical results applying UMOP to domains ranging from deterministic and single-agent with no environment actions to non-deterministic and multi-agent with complex environment actions. UMOP is shown to be a rich and efficient planning system.
Some Results on the Completeness of Approximation Based Reasoning We present two results that relate the completeness conditions for the 0-approximation for two formalisms: the action description language A and the situation calculus. The first result indicates that the completeness condition for the situation calculus formalism implies the corresponding condition for the action language formalism. The second result indicates that an action theory in A can sometimes be simplified to an equivalent action theory whose completeness condition is weaker than the original theory for certain queries.
Planning Graphs and Knowledge Compilation. One of the major advances in classical planning has been the development of Graphplan. Graphplan builds a layered structure called the planning graph, and then searches this structure backwards for a plan. Modern SAT and CSP approaches also use the planning graph but replace the regression search by a constraint-directed search. The planning graph uncovers implicit constraints in the problem that reduce the size of the search tree. Such constraints encode lower bounds on the number of time steps required for achieving the goal and account for the huge performance gap between Graphplan and its predecessors. Still, the form of local consistency underlying the construction of the planning graph is not well understood, being described by various authors as a limited form of negative binary resolution, k-consistency, or 2-j consistency. In this paper, we aim to shed light on this issue by showing that the computation of the planning graph corresponds exactly to the iterative computation of prime implicates of size one and two over the logical encoding of the problem with the goals removed. The correspondence between planning graphs and a precise form of knowledge compilation provides a well-founded basis for understanding and developing extensions of the planning graph to non-Strips settings, and suggests novel and effective forms of knowledge compilation in other contexts. We explore some of these extensions in this paper and relate planning graphs with bounded variable elimination algorithms as studied by Rina Dechter and others.
Automatic OBDD-based generation of universal plans in non-deterministic domains Most real world environments are non-deterministic. Automatic plan formation in non-deterministic domains is, however, still an open problem. In this paper we present a practical algorithm for the automatic generation of solutions to planning problems in non-deterministic domains. Our approach has the following main features. First, the planner generates Universal Plans. Second, it generates plans which are guaranteed to achieve the goal in spite of non-determinism, if such plans exist. Otherwise, the planner generates plans which encode iterative trial-and-error strategies (e.g. try to pick up a block until succeed), which are guaranteed to achieve the goal under the assumption that if there is a non-deterministic possibility for the iteration to terminate, this will not be ignored forever. Third, the implementation of the planner is based on symbolic model checking techniques which have been designed to explore efficiently large state spaces. The implementation exploits the compactness of OBDDs (Ordered Binary Decision Diagrams) to express in a practical way universal plans of extremely large size.
The complexity of planning problems with simple causal graphs We present three new complexity results for classes of planning problems with simple causal graphs. First, we describe a polynomial-time algorithm that uses macros to generate plans for the class 3S of planning problems with binary state variables and acyclic causal graphs. This implies that plan generation may be tractable even when a planning problem has an exponentially long minimal solution. We also prove that the problem of plan existence for planning problems with multi-valued variables and chain causal graphs is NP-hard. Finally, we show that plan existence for planning problems with binary state variables and polytree causal graphs is NP-complete.
Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Computing Stable Models with Quantified Boolean Formulas: Some Experimental Results Quantified boolean formulas (QBFs) are extensions of ordinary propositional formulas which admit efficient representations of many important reasoning tasks. The existence of sophisticated QBF-solvers makes it possible to realize prototype systems for quite different knowledge-representation formalisms in a uniform manner. The system QUIP follows this idea and implements inference tasks from the area of nonmonotonic reasoning by using suitable encodings to QBFs. In this paper, we report experimental results evaluating the performance of QUIP. In particular, we deal here with the disjunctive logic programming module of QUIP, which will be the subject of two kinds of performance tests: First, we compare QUIP with the state-of-the-art logic programming systems dlv and smodels, and second, we examine the performance of different QBF-solvers on the considered problem classes. As benchmark philosophy we employ classes of disjunctive logic programs which are responsible for the Σ^P_2-hardness of the given decision problems. The results show reasonable performance of the QBF approach and indicate possible improvements of QUIP by exploiting different QBF-solvers as underlying inference engines.
On the scale and performance of cooperative Web proxy caching While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
A Framework for Distributed Object-Oriented Testing
Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification.
1.002218
0.00376
0.003709
0.002347
0.002008
0.001791
0.001438
0.000946
0.000552
0.000118
0.000007
0
0
0
Coding for High Availability of a Distributed-Parallel Storage System We have developed a distributed parallel storage system that employs the aggregate bandwidth of multiple data servers connected by a high-speed wide-area network to achieve scalability and high data throughput. This paper studies different schemes to enhance the reliability and availability of such network-based distributed storage systems. The general approach of this paper employs "erasure" error-correcting codes that can be used to reconstruct missing information caused by hardware, software, or human faults. The paper describes the approach and develops optimized algorithms for the encoding and decoding operations. Moreover, the paper presents techniques for reducing the communication and computation overhead incurred while reconstructing missing data from the redundant information. These techniques include clustering, multidimensional coding, and the full two-dimensional parity schemes. The paper considers trade-offs between redundancy, fault tolerance, and complexity of error recovery.
Network file storage with graceful performance degradation A file storage scheme is proposed for networks containing heterogeneous clients. In the scheme, the performance measured by file-retrieval delays degrades gracefully under increasingly serious faulty circumstances. The scheme combines coding with storage for better performance. The problem is NP-hard for general networks; and this article focuses on tree networks with asymmetric edges between adjacent nodes. A polynomial-time memory-allocation algorithm is presented, which determines how much data to store on each node, with the objective of minimizing the total amount of data stored in the network. Then a polynomial-time data-interleaving algorithm is used to determine which data to store on each node for satisfying the quality-of-service requirements in the scheme. By combining the memory-allocation algorithm with the data-interleaving algorithm, an optimal solution to realize the file storage scheme in tree networks is established.
Using a Gigabit Ethernet Cluster as a Distributed Disk Array with Multiple Fault Tolerance A cluster of PCs can be seen as a collection of networked low cost disks; such a collection can be operated by proper software so as to provide the abstraction of a single, larger block device. By adding suitable data redundancy, such a disk collection as a whole could act as a single, highly fault tolerant, distributed RAID device, providing capacity and reliability along with the convenient price/performance typical of commodity clusters. We report about the design and performance of DRAID, a distributed RAID prototype running on a Gigabit Ethernet cluster of PCs. DRAID offers storage services under a Single I/O Space (SIOS) block device abstraction. The SIOS feature implies that the storage space is accessible by each of the stations in the cluster, rather than through one or a few end-points, with a potentially higher aggregate I/O bandwidth and better suitability to parallel I/O.
Distributed parallel data storage systems: a scalable approach to high speed image servers We have designed, built, and analyzed a distributed parallel storage system that will supply image streams fast enough to permit multi-user, “real-time”, video-like applications in a wide-area ATM network-based Internet environment. We have based the implementation on user-level code in order to secure portability; we have characterized the performance bottlenecks arising from operating system and hardware issues, and based on this have optimized our design to make the best use of the available performance. Although at this time we have only operated with a few classes of data, the approach appears to be capable of providing a scalable, high-performance, and economical mechanism to provide a data storage system for several classes of data (including mixed multimedia streams), and for applications (clients) that operate in a high-speed network environment.
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
Performance and Scalability of Client-Server Database Architectures Recent developments in software and hardware changed the way database systems are built and operate. In this paper we present database architectures based on the Client-Server paradigm and study their performance and scalability under different query/update workloads. The architectures are: Standard Client-Server, Client-Server with Multiple Disks, and Enhanced Client-Server. Data replication and client query result caching are used as the main mechanisms to improve the query throughput. The role of the server is to maintain system-wide data consistency and in the case of Enhanced Client-Server to selectively propagate updates on demand. Our study shows that except for the case of mostly update workloads, the Standard Client-Server architecture is outperformed by the other two architectures by one or more orders of magnitude. The Client-Server with Multiple Disks architecture offers performance comparable to that achieved by the Enhanced Client-Server for up to 100 clients, but the latter scales up a lot better for higher number of clients.
A probabilistic limit on the virtual size of replicated disk systems Recently, there has been considerable interest in parallel disk drive systems, in which full or partial replication of the stored data is used for both fault tolerance and enhanced performance. The performance-enhancement derives both from the ability to do parallel reads, and from the reduction of seek time which results from being able to assign a read to whichever drive will produce the shortest seek. Although earlier work implied that for a k-drive system, mean seek distance for read converges to 0 as k → ∞, a refined analysis is presented which shows that this limit is actually nonzero. It is further shown that the system behaves probabilistically as if k were small, no matter how large the physical value of k is.
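A toy simulation makes the nonzero limit plausible. The model below is our guess at the essential mechanism, not the paper's analysis: reads go to the nearest head, but a write must update every replica, dragging all k heads to the same cylinder, so read seeks never vanish no matter how many drives are added.

```python
import numpy as np

def mean_read_seek(k, p_write=0.3, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    heads = rng.uniform(size=k)           # head positions on a unit-length disk
    total, reads = 0.0, 0
    for _ in range(n):
        target = rng.uniform()
        if rng.uniform() < p_write:
            heads[:] = target             # replication: a write moves every head
        else:
            i = int(np.argmin(np.abs(heads - target)))
            total += abs(heads[i] - target)
            heads[i] = target             # the serving head stays at the target
            reads += 1
    return total / reads

for k in (1, 2, 4, 8, 32):
    print(k, round(mean_read_seek(k), 4))  # improves with k but levels off above zero
```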
Periodic retrieval of videos from disk arrays A growing number of applications need access to video data stored in digital form on secondary storage devices (e.g., video-on-demand, multimedia messaging). As a result, video servers that are responsible for the storage and retrieval, at fixed rates, of hundreds of videos from disks are becoming increasingly important. Since video data tends to be voluminous, several disks are usually used in order to store the videos. A challenge is to devise schemes for the storage and retrieval of videos that distribute the workload evenly across disks, reduce the cost of the server and at the same time, provide good response times to client requests for video data. In this paper, we present schemes that retrieve videos periodically from disks in order to provide better response times to client requests. We present two schemes that stripe videos across multiple disks in order to distribute the workload uniformly among them. For the two striping schemes, we show that the problem of retrieving videos periodically is equivalent to that of scheduling periodic tasks on a multiprocessor. For the multiprocessor scheduling problems, we present and compare schemes for computing start times for the tasks, if it is determined that they are schedulable.
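The reduction the abstract mentions (periodic video retrieval as periodic task scheduling) suggests a simple admission-control sketch: treat each stream as a task with period T and per-period service time C, and partition tasks across disks subject to the classical EDF utilization bound of 1 per disk. The first-fit heuristic and the numbers below are illustrative only; the paper's start-time computation schemes are more refined.

```python
def first_fit_partition(streams, n_disks):
    # streams: (C, T) pairs; EDF on one disk is feasible iff sum(C/T) <= 1.
    loads, placement = [0.0] * n_disks, []
    for (c, t) in streams:
        u = c / t
        disk = next((d for d in range(n_disks) if loads[d] + u <= 1.0), None)
        if disk is None:
            return None                   # reject: no disk can absorb this stream
        loads[disk] += u
        placement.append(disk)
    return placement, loads

streams = [(0.02, 0.10), (0.03, 0.10), (0.01, 0.05), (0.04, 0.10)]  # seconds
print(first_fit_partition(streams, n_disks=2))  # ([0, 0, 0, 1], loads ~ [0.7, 0.4])
```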
RACS: a case for cloud storage diversity The increasing popularity of cloud storage is leading organizations to consider moving data out of their own data centers and into the cloud. However, success for cloud storage providers can present a significant risk to customers; namely, it becomes very expensive to switch storage providers. In this paper, we make a case for applying RAID-like techniques used by disks and file systems, but at the cloud storage level. We argue that striping user data across multiple providers can allow customers to avoid vendor lock-in, reduce the cost of switching providers, and better tolerate provider outages or failures. We introduce RACS, a proxy that transparently spreads the storage load over many providers. We evaluate a prototype of our system and estimate the costs incurred and benefits reaped. Finally, we use trace-driven simulations to demonstrate how RACS can reduce the cost of switching storage vendors for a large organization such as the Internet Archive by seven-fold or more by varying erasure-coding parameters.
Sparse Feature Learning for Deep Belief Networks Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g. low dimension, sparsity, etc). Others are based on approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine trained probabilistically, namely a Restricted Boltzmann Machine. We propose a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation. We demonstrate this method by extracting features from a dataset of handwritten numerals, and from a dataset of natural image patches. We show that by stacking multiple levels of such machines and by training sequentially, high-order dependencies between the input observed variables can be captured.
Selective versioning in a secure disk system Making vital disk data recoverable even in the event of OS compromises has become a necessity, in view of the increased prevalence of OS vulnerability exploits over the recent years. We present the design and implementation of a secure disk system, SVSDS, that performs selective, flexible, and transparent versioning of stored data, at the disk-level. In addition to versioning, SVSDS actively enforces constraints to protect executables and system log files. Most existing versioning solutions that operate at the disk-level are unaware of the higher-level abstractions of data, and hence are not customizable. We evolve a hybrid solution that combines the advantages of disk-level and file-system--level versioning systems thereby ensuring security, while at the same time allowing flexible policies. We implemented and evaluated a software-level prototype of SVSDS in the Linux kernel and it shows that the space and performance overheads associated with selective versioning at the disk level are minimal.
Near-Optimal Parallel Prefetching and Caching Recently there has been a great deal of interest in the operating systems research community in prefetching and caching data from parallel disks, as a technique for enabling serial applications to improve input--output (I/O) performance. In this paper, algorithms are considered for integrated prefetching and caching in a model with a fixed-size cache and any number of backing storage devices (disks). The integration of caching and prefetching with a single disk was previously considered by Cao, Felten, Karlin, and Li. Here, it is shown that the natural extension of their aggressive algorithm to the parallel disk case is suboptimal by a factor near the number of disks in the worst case. The main result is a new algorithm, reverse aggressive, with near-optimal performance for integrated prefetching and caching in the presence of multiple disks.
A Conformant Planner with Explicit Disjunctive Representation of Belief States This paper describes a novel and competitive complete conformant planner. Key to the enhanced performance is an efficient encoding of belief states as disjunctive normal form formulae and an efficient procedure for computing the successor belief state. We provide experimental comparative evaluation on a large pool of benchmarks. The novel design provides great efficiency and enhanced scalability, along with the intuitive structure of disjunctive normal form representations.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.071111
0.066667
0.013333
0.001481
0.000239
0.000015
0.000005
0.000002
0
0
0
0
0
0
On the Singularity in Deep Neural Networks. In this paper, we analyze a deep neural network model from the viewpoint of singularities. First, we show that the hierarchical structure of the deep neural network introduces a large number of critical points, which form straight lines. Next, we derive sufficient conditions under which the deep neural network has no critical points introduced by the hierarchical structure.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
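Since the stable model definition is compact, it is easy to state in executable form; the following brute-force sketch checks every candidate set against the Gelfond-Lifschitz reduct. The rule encoding and helper names are my own assumptions for illustration, and the enumeration is exponential by design.

```python
from itertools import combinations

# A rule (head, pos, neg) encodes  head :- pos, not neg.
def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules blocked by the candidate,
    then strip the remaining rules' negative bodies."""
    return [(h, pos) for h, pos, neg in program
            if not any(a in candidate for a in neg)]

def least_model(positive_program):
    """Least model of a negation-free program by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    """Enumerate stable models by testing every candidate atom set."""
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            cand = set(cand)
            if least_model(reduct(program, cand)) == cand:
                yield cand

# p :- not q.   q :- not p.   Two stable models: {p} and {q}.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(list(stable_models(prog, {"p", "q"})))   # [{'p'}, {'q'}]
```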
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
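The baseline the abstract refers to, Davis-Putnam-style recursive splitting extended to quantifiers, can be sketched in a few lines; the encoding below (prenex CNF with integer literals) is an assumption, and the sketch deliberately omits the pruning techniques that are the paper's contribution.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a prenex-CNF QBF by naive recursive splitting.

    prefix: list of (quantifier, var), quantifier in {'A', 'E'};
    clauses: list of clauses; a clause is a list of nonzero ints,
    positive for a variable and negative for its negation.
    """
    if assignment is None:
        assignment = {}
    if not prefix:
        # Every variable is assigned: check all clauses are satisfied.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = []
    for value in (False, True):
        assignment[v] = value
        branches.append(eval_qbf(rest, clauses, assignment))
    del assignment[v]
    # A universal variable needs both branches; an existential needs one.
    return all(branches) if q == 'A' else any(branches)

# forall x exists y . (x <-> y), encoded as (not x or y) and (x or not y):
print(eval_qbf([('A', 1), ('E', 2)], [[-1, 2], [1, -2]]))   # True
```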
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
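As a concrete sketch of the kernel trick at work, the function below performs kernel PCA with an RBF kernel in numpy, computing projections from the centered kernel matrix alone, without ever constructing feature-space coordinates. The kernel choice, gamma, and centering steps are standard textbook details assumed here, not taken from the abstract.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto the top principal components in RBF-kernel
    feature space, using only the n x n kernel matrix."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center the (implicit) feature vectors via the kernel matrix.
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas    # projections of the training points

X = np.random.default_rng(0).normal(size=(100, 3))
print(kernel_pca(X, n_components=2, gamma=0.5).shape)   # (100, 2)
```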
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been explored. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
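A toy example of the square-root smoothing idea, on a hypothetical 1-D pose chain that is not from the paper: factor the measurement Jacobian with QR and back-substitute, instead of forming and inverting the information matrix.

```python
import numpy as np

# Toy 1-D SLAM: poses x0, x1, x2 with a prior, two odometry links,
# and one noisy loop closure. Smoothing solves  min ||A x - b||^2.
A = np.array([
    [ 1.0,  0.0,  0.0],   # prior:              x0      = 0
    [-1.0,  1.0,  0.0],   # odometry x0 -> x1:  x1 - x0 = 1
    [ 0.0, -1.0,  1.0],   # odometry x1 -> x2:  x2 - x1 = 1
    [ 1.0,  0.0, -1.0],   # loop closure:       x0 - x2 = -2.1 (noisy)
])
b = np.array([0.0, 1.0, 1.0, -2.1])

# Square-root approach: factor A = QR and solve the triangular system
# R x = Q^T b, rather than forming the information matrix A^T A.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)
print(x)   # estimated poses, close to [0, 1, 2]
```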
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Frog: A Framework for Context-Based File Systems This article presents a framework, Frog, for Context-Based File Systems (CBFSs), which aims to simplify the development of context-based file systems and applications. Unlike existing informed-based context-aware systems, Frog is a unifying informed-based framework that abstracts context-specific solutions as views, allowing applications to make view selections according to application behaviors. The framework can not only eliminate overheads induced by traditional context analysis, but also simplify the interactions between the context-based file systems and applications. Rather than propagating data through solution-specific interfaces, views in Frog can be selected by inserting their names in file path strings. With Frog in place, programmers can migrate an application from one solution to another by switching among views rather than changing programming interfaces. Since data consistency issues are automatically enforced by the framework, file-system developers can focus their attention on context-specific solutions. We implement two prototypes to demonstrate the strengths and overheads of our design. Inspired by the observation that more than 50% of the files in a file system are small (<4 KB), we create a Bi-context Archiving Virtual File System (BAVFS) that utilizes conservative and aggressive prefetching for the contexts of random and sequential reads. To improve the performance of random read-and-write operations, the Bi-context Hybrid Virtual File System (BHVFS) combines the update-in-place and update-out-of-place solutions for read-intensive and write-intensive contexts. Our experimental results show that the benefits of Frog-based CBFSs outweigh the overheads introduced by integrating multiple context-specific solutions.
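The path-embedded view selection can be illustrated with a tiny parser; the `@view` syntax and the function below are purely hypothetical, since the abstract does not specify Frog's actual naming convention.

```python
def split_view(path, marker="@"):
    """Strip hypothetical '@view' components out of a path string,
    returning the selected views and the plain path."""
    parts = path.split("/")
    views = [p[1:] for p in parts if p.startswith(marker)]
    plain = "/".join(p for p in parts if not p.startswith(marker))
    return views, plain

print(split_view("/data/@seqread/logs/trace.txt"))
# (['seqread'], '/data/logs/trace.txt')
```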
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been explored. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Multichannel Sleep Stage Classification and Transfer Learning using Convolutional Neural Networks Current sleep medicine relies on the supervised analysis of polysomnographic measurements, comprising, among others, electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG) signals. Convolutional neural networks (CNN) provide an interesting framework for the automated classification of sleep based on these raw waveforms. In this study, we compare existing CNN approaches on four databases of pathological and physiological subjects. The best performing model resulted in Cohen's Kappa of κ = 0.75 on healthy subjects and κ = 0.64 on patients suffering from a variety of sleep disorders. Further, we show the advantages of additional sensor data (i.e. EOG and EMG). Deep learning approaches require a lot of data, which is scarce for less prevalent diseases. For this, we propose a transfer learning procedure that pretrains a model on large public data and fine-tunes it on each subject from a smaller dataset. This procedure is demonstrated using a private REM Behaviour Disorder database, improving sleep classification by 24.4%.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been explored. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Major Transitions in Political Order We present three major transitions that occur on the way to the elaborate and diverse societies of the modern era. Our account links the worlds of social animals such as pigtail macaques and monk parakeets to examples from human history, including 18th Century London and the contemporary online phenomenon of Wikipedia. From the first awareness and use of group-level social facts to the emergence of norms and their self-assembly into normative bundles, each transition represents a new relationship between the individual and the group. At the center of this relationship is the use of coarse-grained information gained via lossy compression. The role of top-down causation in the origin of society parallels that conjectured to occur in the origin and evolution of life itself.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been explored. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Object-Oriented Approach to Structured Parallel Programming Several kinds of parallel applications tend to employ regular patterns for communication between and internally to their components. Once the most commonly used patterns, such as pipelines, farms, and trees, are identified (both in terms of their components and their communication), an environment can make them available as high-level abstractions to use in writing applications. This can lead to a structured style of parallel programming. The paper shows how this structured approach can be accommodated within an object-oriented environment: on the one hand, a class library provides the patterns; on the other hand, programmers can define new patterns by exploiting inheritance. Several examples illustrate the approach and show that it can improve the usability of a parallel programming environment without sacrificing efficiency.
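A minimal Python sketch of the class-library idea (the pattern names mirror the pipelines and farms mentioned above, but the API is an assumption): a base class fixes the communication structure, and inheritance derives new patterns.

```python
from concurrent.futures import ThreadPoolExecutor

class Pipeline:
    """A pattern from the library: stages applied in order, each
    stage mapped over the items in parallel."""
    def __init__(self, *stages):
        self.stages = stages

    def run(self, items):
        for stage in self.stages:
            with ThreadPoolExecutor() as pool:
                items = list(pool.map(stage, items))
        return items

class Farm(Pipeline):
    """A new pattern by inheritance: a farm is a single replicated worker."""
    def __init__(self, worker):
        super().__init__(worker)

print(Pipeline(lambda x: x + 1, lambda x: x * 2).run([1, 2, 3]))   # [4, 6, 8]
```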
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been explored. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
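The XOR bookkeeping behind such a two-dimensional array can be sketched in a few lines. The choice below of mirroring the row parities is one illustrative reading of "half the existing parity elements"; the paper specifies the exact layout and analyzes its reliability.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
data = rng.integers(0, 256, size=(n, n), dtype=np.uint8)   # n^2 data elements

row_parity = np.bitwise_xor.reduce(data, axis=1)   # n parity elements
col_parity = np.bitwise_xor.reduce(data, axis=0)   # n more parity elements
row_parity_mirror = row_parity.copy()              # n extra elements (the added redundancy)

# Rebuild a single lost element from the survivors in its row plus the row parity:
i, j = 2, 1
rebuilt = np.bitwise_xor.reduce(np.delete(data[i], j)) ^ row_parity[i]
assert rebuilt == data[i, j]
```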
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Everything you always wanted to know about planning (but were afraid to ask) Domain-independent planning is one of the long-standing sub-areas of Artificial Intelligence (AI), aiming at approaching human problem-solving flexibility. The area has long had an affinity towards playful illustrative examples, imprinting it on the mind of many a student as an area concerned with the rearrangement of blocks, and with the order in which to put on socks and shoes (not to mention the disposal of bombs in toilets). Working on the assumption that this "student" is you - the readers in earlier stages of their careers - I herein aim to answer three questions that you surely desired to ask back then already: What is it good for? Does it work? Is it interesting to do research in? Answering the latter two questions in the affirmative (of course!), I outline some of the major developments of the last decade, revolutionizing the ability of planning to scale up, and the understanding of the enabling technology. Answering the first question, I point out that modern planning proves to be quite useful for solving practical problems - including, perhaps, yours.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
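Stability can be checked directly from the definition: guess a set of atoms, form the reduct by deleting rules whose negative body intersects the guess and stripping the remaining negative literals, then compare the least model of that positive program with the guess. A brute-force Python sketch for tiny propositional programs (exponential in the number of atoms, purely illustrative):

```python
from itertools import chain, combinations

# Each rule is (head, positive_body, negative_body); atoms are strings.
# Example program:  p :- not q.   q :- not p.
rules = [("p", frozenset(), frozenset({"q"})),
         ("q", frozenset(), frozenset({"p"}))]
atoms = {a for h, pos, neg in rules for a in {h} | pos | neg}

def least_model(positive_rules):
    """Least fixpoint of a negation-free program."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(candidate):
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

candidates = chain.from_iterable(combinations(sorted(atoms), r)
                                 for r in range(len(atoms) + 1))
print([set(c) for c in candidates if is_stable(set(c))])  # -> [{'p'}, {'q'}]
```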
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
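The baseline that such improvements start from is a direct recursion: branch on the outermost variable and combine the two branches with "or" for an existential quantifier and "and" for a universal one. A minimal evaluator for closed QBF over CNF matrices, not the authors' optimized prover:

```python
# A formula is a quantifier prefix plus a CNF matrix over integer variables.
# The formula is assumed closed: every variable appears in the prefix.

def evaluate(prefix, clauses, assignment=None):
    """prefix: list of ('E'|'A', var); clauses: list of lists of signed ints."""
    assignment = assignment or {}
    if not prefix:
        # Every clause must contain at least one satisfied literal.
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (evaluate(rest, clauses, {**assignment, v: val})
                for val in (False, True))
    return any(branches) if q == 'E' else all(branches)

# exists x forall y: (x or y) and (x or not y)   -- true, witnessed by x = True
print(evaluate([('E', 1), ('A', 2)], [[1, 2], [1, -2]]))  # True
```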
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Parameterized Complexity Results for Agenda Safety in Judgment Aggregation Many problems arising in computational social choice are of high computational complexity, and some are located at higher levels of the Polynomial Hierarchy. We argue that a parameterized complexity analysis provides valuable insight into the factors contributing to the complexity of these problems, and can lead to practically useful algorithms. As a case study, we consider the problem of agenda safety for the majority rule in judgment aggregation, consider several natural parameters for this problem, and determine the parameterized complexity for each of these. Our analysis is aimed at obtaining fixed-parameter tractable (fpt) algorithms that use a small number of calls to a SAT solver. We identify several positive results, including several results where the problem can be fpt-reduced to a single SAT instance. In addition, we identify several negative results. We hope that this work may help initiate a structured parameterized complexity investigation of problems arising in the field of computational social choice that are located at higher levels of the Polynomial Hierarchy.
Parameterized complexity of optimal planning: a detailed map The goal of this paper is a systematic parameterized complexity analysis of different variants of propositional STRIPS planning. We identify several natural problem parameters and study all possible combinations of 9 parameters in 6 different settings. These settings arise, for instance, from the distinction if negative effects of actions are allowed or not. We provide a complete picture by establishing for each case either paraNP-hardness (i.e., the parameter combination does not help) or W[t]-completeness with t ∈ {1, 2} (i.e., fixed-parameter intractability), or FPT (i.e., fixed-parameter tractability).
Asymptotically optimal encodings of conformant planning in QBF The world is unpredictable, and acting intelligently requires anticipating possible consequences of actions that are taken. Assuming that the actions and the world are deterministic, planning can be represented in the classical propositional logic. Introducing nondeterminism (but not probabilities) or several initial states increases the complexity of the planning problem and requires the use of quantified Boolean formulae (QBF). The currently leading logic-based approaches to conditional planning use explicitly or implicitly a QBF with the prefix ∃∀∃. We present formalizations of the planning problem as QBF which have an asymptotically optimal linear size and the optimal number of quantifier alternations in the prefix: ∃∀ and ∀∃. This is in accordance with the fact that the planning problem (under the restriction to polynomial size plans) is on the second level of the polynomial hierarchy, not on the third.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic Programming and Negation: A Survey. We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them. 1991 Mathematics Subject Classification: 68Q40, 68T15. CR Categories: F.3.2, F.4.1, H.3.3, I.2.3. Keywords and Phrases: negation, general logic programs, non-monotonic reasoning. Notes: The work of the first author was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2). The work...
A sufficient condition for backtrack-bounded search Backtrack search is often used to solve constraint satisfaction problems. A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree. This analysis leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems. The solutions involve a combination of relaxation preprocessing and backtrack search. The bounds are expressed in terms of the structure of the constraint connections. Specifically, the effort is shown to have a bound exponential in the size of the largest biconnected component of the constraint graph, as opposed to the size of the graph as a whole.
Convergence of a Nonconforming Multiscale Finite Element Method The multiscale finite element method (MsFEM) [T. Y. Hou, X. H. Wu, and Z. Cai, Math. Comp., 1998, to appear; T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189] has been introduced to capture the large scale solutions of elliptic equations with highly oscillatory coefficients. This is accomplished by constructing the multiscale base functions from the local solutions of the elliptic operator. Our previous study reveals that the leading order error in this approach is caused by the ``resonant sampling,'' which leads to large error when the mesh size is close to the small scale of the continuous problem. Similar difficulty also arises in numerical upscaling methods. An oversampling technique has been introduced to alleviate this difficulty [T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189]. A consequence of the oversampling method is that the resulting finite element method is no longer conforming. Here we give a detailed analysis of the nonconforming error. Our analysis also reveals a new cell resonance error which is caused by the mismatch between the mesh size and the wavelength of the small scale. We show that the cell resonance error is of lower order. Our numerical experiments demonstrate that the cell resonance error is generically small and is difficult to observe in practice.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
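The L1-regularized least squares subproblem can be illustrated with a generic proximal-gradient (ISTA) solver. Note this is a stand-in: the paper's contribution is a faster feature-sign search method, and the basis below is random rather than learned.

```python
import numpy as np

def ista(B, x, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min_s 0.5*||x - B s||^2 + lam*||s||_1.
    A generic L1 solver, not the feature-sign search algorithm from the paper."""
    L = np.linalg.norm(B, 2) ** 2            # Lipschitz constant of the smooth part
    s = np.zeros(B.shape[1])
    for _ in range(n_iter):
        g = B.T @ (B @ s - x)                # gradient of the least-squares term
        z = s - g / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(0)
B = rng.standard_normal((64, 128))           # overcomplete basis
x = B[:, :3] @ np.array([1.0, -2.0, 0.5])    # signal built from 3 atoms
print(np.count_nonzero(np.abs(ista(B, x)) > 1e-3))  # typically only a few active coefficients
```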
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
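The placement arithmetic behind interleaving is round-robin striping: consecutive logical blocks land on consecutive disks, so a multi-block transfer engages all spindles at once. A hypothetical block-granularity sketch (real systems interleave at finer granularities such as bytes or sectors):

```python
def place(logical_block: int, n_disks: int) -> tuple[int, int]:
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % n_disks, logical_block // n_disks

for b in range(8):
    print(b, place(b, n_disks=4))
# Blocks 0..3 land on disks 0..3 at offset 0 and blocks 4..7 at offset 1,
# so a 4-block transfer proceeds on all four disks in parallel.
```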
Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[\log^k n]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all $k$.
Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
ARIMA time series modeling and forecasting for adaptive I/O prefetching Bursty application I/O patterns, together with transfer limited storage devices, combine to create a major I/O bottleneck on parallel systems. This paper explores the use of time series models to forecast application I/O request times, then prefetching I/O requests during computation intervals to hide I/O latency. Experimental results with I/O intensive scientific codes show performance improvements compared to standard UNIX prefetching strategies.
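A hedged sketch of the modeling step, assuming statsmodels' ARIMA API and synthetic inter-request times; the paper's model selection and prefetch policy are more involved:

```python
# Fit an ARIMA model to a history of I/O inter-request times and forecast the
# next few, in the spirit of prefetching during predicted computation gaps.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
history = 10 + np.sin(np.arange(200) / 5.0) + rng.normal(0, 0.2, 200)  # synthetic gaps (ms)

model = ARIMA(history, order=(2, 0, 1)).fit()   # order (p, d, q) chosen for illustration
next_gaps = model.forecast(steps=3)             # predicted upcoming inter-request times
print(next_gaps)                                # prefetch if the predicted gap is long enough
```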
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision-making processes. Though current studies on time-series-based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also propose a novel framework to integrate the two methods so that our approach may generalize to a larger problem domain. We discuss the advantages, the limitations, and the future work of our study, along with both practical and theoretical contributions.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.033333
0.025
0
0
0
0
0
0
0
0
0
0
0
Automatic recovery from runtime failures We present a technique to make applications resilient to failures. This technique is intended to maintain a faulty application functional in the field while the developers work on permanent and radical fixes. We target field failures in applications built on reusable components. In particular, the technique exploits the intrinsic redundancy of those components by identifying workarounds consisting of alternative uses of the faulty components that avoid the failure. The technique is currently implemented for Java applications but makes little or no assumptions about the nature of the application, and works without interrupting the execution flow of the application and without restarting its components. We demonstrate and evaluate this technique on four mid-size applications and two popular libraries of reusable components affected by real and seeded faults. In these cases the technique is effective, maintaining the application fully functional with between 19% and 48% of the failure-causing faults, depending on the application. The experiments also show that the technique incurs an acceptable runtime overhead in all cases.
Handling Software Faults with Redundancy Software engineering methods can increase the dependability of software systems, and yet some faults escape even the most rigorous and methodical development process. Therefore, to guarantee high levels of reliability in the presence of faults, software systems must be designed to reduce the impact of the failures caused by such faults, for example by deploying techniques to detect and compensate for erroneous runtime conditions. In this chapter, we focus on software techniques to handle software faults, and we survey several such techniques developed in the area of fault tolerance and more recently in the area of autonomic computing. Since practically all techniques exploit some form of redundancy, we consider the impact of redundancy on the software architecture, and we propose a taxonomy centered on the nature and use of redundancy in software systems. The primary utility of this taxonomy is to classify and compare techniques to handle software faults.
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
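The parity mechanism underlying the higher RAID levels is plain XOR: one parity block per stripe is enough to rebuild any single failed disk. A minimal numpy illustration (layout simplified; RAID level 5 additionally rotates the parity block across disks):

```python
import numpy as np

n_data_disks = 4
stripe = np.random.default_rng(2).integers(0, 256, size=(n_data_disks, 16),
                                           dtype=np.uint8)   # one stripe of data blocks
parity = np.bitwise_xor.reduce(stripe, axis=0)               # the parity block

failed = 2                                                   # pretend disk 2 died
survivors = np.delete(stripe, failed, axis=0)
rebuilt = np.bitwise_xor.reduce(survivors, axis=0) ^ parity  # XOR of survivors and parity
assert np.array_equal(rebuilt, stripe[failed])
```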
Extended stable semantics for normal and disjunctive programs
The nature of statistical learning theory.
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
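As a runnable stand-in for fitting a soft-margin linear decision surface, here is a small Pegasos-style stochastic sub-gradient trainer for the hinge loss. This substitutes a modern optimizer for the paper's quadratic-programming formulation and omits the nonlinear feature mapping:

```python
import numpy as np

def hinge_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Linear soft-margin SVM via stochastic sub-gradient descent on the hinge loss."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)                # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)                 # regularization shrinkage
            if margin < 1:                       # point inside the margin: push it out
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = hinge_svm(X, y)
print(np.mean(np.sign(X @ w + b) == y))          # ~1.0 on this separable data
```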
Improving the I/O Performance of Real-Time Database Systems with Multiple-Disk Storage Structures
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous localization and mapping (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.2
0.000219
0
0
0
0
0
0
0
0
0
0
0
Error propagation in sparse linear systems with peptide-protein incidence matrices We study the additive errors in solutions to systems Ax = b of linear equations where vector b is corrupted, with a focus on systems where A is a 0,1-matrix with very sparse rows. We give a worst-case error bound in terms of an auxiliary LP, as well as graph-theoretic characterizations of the optimum of this error bound in the case of two variables per row. The LP solution indicates which measurements should be combined to minimize the additive error of any chosen variable. The results are applied to the problem of inferring the amounts of proteins in a mixture, given inaccurate measurements of the amounts of peptides after enzymatic digestion. Results on simulated data (but from real proteins split by trypsin) suggest that the errors of most variables blow up by very small factors only.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval In this paper, we propose a new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents. In order to capture the rich contextual structures in a query or a document, we start with each word within a temporal context window in a word sequence to directly capture contextual features at the word n-gram level. Next, the salient word n-gram features in the word sequence are discovered by the model and are then aggregated to form a sentence-level feature vector. Finally, a non-linear transformation is applied to extract high-level semantic information to generate a continuous vector representation for the full text string. The proposed convolutional latent semantic model (CLSM) is trained on clickthrough data and is evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that the proposed model effectively captures salient semantic information in queries and documents for the task while significantly outperforming previous state-of-the-art semantic models.
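The convolutional-pooling structure can be miniaturized in numpy: embed words, slide a trigram window, apply a nonlinearity, and max-pool over positions to get one fixed-length vector. Every dimension and weight below is an illustrative stand-in; the real CLSM uses letter-trigram hashing and is trained on clickthrough data:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate("deep learning for web search ranking".split())}
E = rng.standard_normal((len(vocab), 8))      # word embeddings (8-d, untrained)
W = rng.standard_normal((3 * 8, 16))          # convolution filter over word trigrams

words = "deep learning for web search".split()
x = E[[vocab[w] for w in words]]              # (n_words, 8)
windows = np.stack([x[i:i + 3].ravel() for i in range(len(words) - 2)])
h = np.tanh(windows @ W)                      # local n-gram feature vectors
v = h.max(axis=0)                             # max-pooling over positions
print(v.shape)                                # (16,) sentence-level vector
```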
Learning Continuous Phrase Representations For Translation Modeling This paper tackles the sparsity problem in estimating phrase translation probabilities by learning continuous phrase representations, whose distributed nature enables the sharing of related phrases in their representations. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a neural network whose weights are learned on parallel training data. Experimental evaluation has been performed on two WMT translation tasks. Our best result improves the performance of a state-of-the-art phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.3 BLEU points.
Modeling Interestingness with Deep Neural Networks.
Neural Models for Information Retrieval. Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.
Unsupervised and Transfer Learning Challenge: a Deep Learning Approach.
Deep learning applications and challenges in big data analytics Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning.
Histograms of Oriented Gradients for Human Detection We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
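A hedged sketch of computing such descriptors with scikit-image's hog function; the parameter values echo the paper's recipe (fine orientation bins, coarse spatial cells, overlapping normalized blocks) but are not claimed to reproduce its exact detector:

```python
import numpy as np
from skimage.feature import hog

image = np.random.default_rng(0).random((128, 64))  # stand-in for a 64x128 detection window
features = hog(image,
               orientations=9,              # fine orientation binning
               pixels_per_cell=(8, 8),      # relatively coarse spatial cells
               cells_per_block=(2, 2),      # overlapping blocks
               block_norm='L2-Hys')         # local contrast normalization
print(features.shape)                       # flat descriptor, fed to a linear SVM
```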
Differentiable Sparse Coding Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a Laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
Constituent Parsing with Incremental Sigmoid Belief Networks We introduce a framework for syntactic parsing with latent variables based on a form of dynamic Sigmoid Belief Networks called Incremental Sigmoid Belief Networks. We demonstrate that a previous feed-forward neural network parsing model can be viewed as a coarse approximation to inference with this class of graphical model. By constructing a more accurate but still tractable approximation, we significantly improve parsing accuracy, suggesting that ISBNs provide a good idealization for parsing. This generative model of parsing achieves state-of-the-art results on WSJ text and 8% error reduction over the baseline neural network parser.
Improving Generalization of Neural Networks Through Pruning
Detour: Informed Internet Routing and Transport Despite its obvious success, robustness, and scalability, the Internet suffers from a number of end-to-end performance and availability problems. In this paper, we attempt to quantify the Internet's inefficiencies and then we argue that Internet behavior can be improved by spreading intelligent routers at key access and interchange points to actively manage traffic. Our Detour prototype aims to demonstrate practical benefits to end users, without penalizing non-Detour users, by aggregating traffic information across connections and using more efficient routes to improve Internet performance.
Portable run-time support for dynamic object-oriented parallel processing Mentat is an object-oriented parallel processing system designed to simplify the task of writing portable parallel programs for parallel machines and workstation networks. The Mentat compiler and run-time system work together to automatically manage the communication and synchronization between objects. The run-time system marshals member function arguments, schedules objects on processors, and dynamically constructs and executes large-grain data dependence graphs. In this article we present the Mentat run-time system. We focus on three aspects—the software architecture, including the interface to the compiler and the structure and interaction of the principal components of the run-time system; the run-time overhead on a component-by-component basis for two platforms, a Sun SparcStation 2 and an Intel Paragon; and an analysis of the minimum granularity required for application programs to overcome the run-time overhead.
Beyond striping: the bridge multiprocessor file system High-performance parallel computers require high-performance file systems. Exotic I/O hardware will be of little use if file system software runs on a single processor of a many-processor machine. We believe that cost-effective I/O for large multiprocessors can best be obtained by spreading both data and file system computation over a large number of processors and disks. To assess the effectiveness of this approach, we have implemented a prototype system called Bridge, and have studied its performance on several data intensive applications, among them external sorting. A detailed analysis of our sorting algorithm indicates that Bridge can profitably be used on configurations in excess of one hundred processors with disks. Empirical results on a 32-processor implementation agree with the analysis, providing us with a high degree of confidence in this prediction. Based on our experience, we argue that file systems such as Bridge will satisfy the I/O needs of a wide range of parallel architectures and applications.
Unsupervised (Parameter) Learning For Mrfs On Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach - a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double-loop algorithm and PCD learning experimentally and show that the former converges faster and more stably than the latter.
1.021093
0.012244
0.011072
0.01
0.002593
0.001053
0.000196
0.000013
0.000004
0
0
0
0
0
Limits for Compact Representation of Plans.
On the complexity of case-based planning This paper analyses the computational complexity of problems related to case-based planning: planning when a plan for a similar instance is known, and planning from a library of plans. It is proven that planning from a single case has the same complexity as generative planning (i.e. planning 'from scratch'); using an extended definition of cases, complexity is reduced if the domain stored in the case is similar to the one for which plans are sought. Planning from a library of cases is shown to have the same complexity. In both cases, the complexity of planning remains, in the worst case, PSPACE-complete.
Conformant plans and beyond: Principles and complexity Conformant planning refers to planning for unobservable problems whose solutions, like those of classical planning, are linear sequences of operators called linear plans. The term 'conformant' is automatically associated with both the unobservable planning model and with linear plans, mainly because the only possible solutions for unobservable problems are linear plans. In this paper we show that linear plans are meaningful not only for unobservable problems but also for partially-observable problems. In that case, the execution of a linear plan generates observations from the environment which must be collected by the agent during the execution of the plan and used at the end to determine whether the goal has been achieved or not; this is the typical case in problems of diagnosis in which all the actions are knowledge-gathering actions. Thus, there are substantial differences between linear plans for unobservable or fully-observable problems and linear plans for partially-observable problems: while linear plans for the former models must conform with properties in state space, linear plans for partially-observable problems must conform with properties in belief space. These differences surface when the problems are allowed to express epistemic goals and conditions using modal logic, and they place the plan-existence decision problem in different complexity classes. Linear plans are one extreme point in a discrete spectrum of solution forms for planning problems. The other extreme point is contingent plans, in which there is a branch point for every possible observation at each time step, and thus the number of branch points is not bounded a priori. In the middle of the spectrum are plans with a bounded number of branch points. Thus, linear plans are plans with zero branch points and contingent plans are plans with an unbounded number of branch points. In this work, we lay down foundations and principles for the general treatment of linear plans and plans of bounded branching, and provide exact complexity results for novel decision problems. We also show that linear plans for partially-observable problems are not only of theoretical interest, since some challenging real-life problems can be dealt with using them.
All PSPACE-Complete Planning Problems Are Equal but Some Are More Equal than Others.
Algorithms and limits for compact plan representations Compact representations of objects is a common concept in computer science. Automated planning can be viewed as a case of this concept: a planning instance is a compact implicit representation of a graph and the problem is to find a path (a plan) in this graph. While the graphs themselves are represented compactly as planning instances, the paths are usually represented explicitly as sequences of actions. Some cases are known where the plans always have compact representations, for example, using macros. We show that these results do not extend to the general case, by proving a number of bounds for compact representations of plans under various criteria, like efficient sequential or random access of actions. In addition to this, we show that our results have consequences for what can be gained from reformulating planning into some other problem. As a contrast to this we also prove a number of positive results, demonstrating restricted cases where plans do have useful compact representations, as well as proving that macro plans have favourable access properties. Our results are finally discussed in relation to other relevant contexts.
Plan reuse versus plan generation: a theoretical and empirical analysis The ability of a planner to reuse parts of old plans is hypothesized to be a valuable tool for improving efficiency of planning by avoiding the repetition of the same planning effort. We test this hypothesis from an analytical and empirical point of view. A comparative worst-case complexity analysis of generation and reuse under different assumptions reveals that it is not possible to achieve a provable efficiency gain of reuse over generation. Further, assuming "conservative" plan...
Planning in a hierarchy of abstraction spaces Additive AND/OR graphs are defined as AND/OR graphs without circuits, which can be considered as folded AND/OR trees; i.e. the cost of a common subproblem is added to the cost as many times as the subproblem occurs, but it is computed only once. Additive ...
New Islands of tractability of cost-optimal planning We study the complexity of cost-optimal classical planning over propositional state variables and unary-effect actions. We discover novel problem fragments for which such optimization is tractable, and identify certain conditions that differentiate between tractable and intractable problems. These results are based on exploiting both structural and syntactic characteristics of planning problems. Specifically, following Brafman and Domshlak (2003), we relate the complexity of planning and the topology of the causal graph. The main results correspond to tractability of cost-optimal planning for propositional problems with polytree causal graphs that either have O(1)-bounded in-degree, or are induced by actions having at most one prevail condition each. Almost all our tractability results are based on a constructive proof technique that connects between certain tools from planning and tractable constraint optimization, and we believe this technique is of interest on its own due to a clear evidence for its robustness.
The FF planning system: fast plan generation through heuristic search We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP.
Logic programming and knowledge representation In this paper, we review recent work aimed at the application of declarative logic programming to knowledge representation in artificial intelligence. We consider extensions of the language of definite logic programs by classical (strong) negation, disjunction, and some modal operators and show how each of the added features extends the representational power of the language.
Continuous retrieval of multimedia data using parallelism Most implementations of workstation-based multimedia information systems cannot support a continuous display of high resolution audio and video data and suffer from frequent disruptions and delays termed hiccups. This is due to the low I/O bandwidth of the current disk technology, the high bandwidth requirement of multimedia objects, and the large size of these objects, which requires them to be almost always disk resident. A parallel multimedia information system and the key technical ideas that enable it to support a real-time display of multimedia objects are described. In this system, a multimedia object is declustered across several disk drives, enabling the system to utilize the aggregate bandwidth of multiple disks to retrieve an object in real-time. Then, the workload of an application is distributed evenly across the disk drives to maximize the processing capability of the system. To support simultaneous display of several multimedia objects for different users, two alternative approaches are described. The first approach multitasks a disk drive among several requests while the second replicates the data and dedicates resources to each individual request. The trade-offs associated with each approach are investigated using a simulation model.
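As a rough illustration of the declustering idea described above (a hypothetical sketch, not the paper's actual layout algorithm), a block-level round-robin placement lets a continuous display draw on the aggregate bandwidth of all drives:

```python
def decluster(blocks, n_disks):
    """Round-robin declustering: block i of a media object goes to
    disk i mod n_disks, so a continuous display can read from all
    disks in parallel rather than being limited by a single drive."""
    layout = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        layout[i % n_disks].append(block)
    return layout

# e.g. 8 blocks over 3 disks -> disks serve blocks [0,3,6], [1,4,7], [2,5]
print(decluster(list(range(8)), 3))
```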
PREFAIL: a programmable tool for multiple-failure injection As hardware failures are no longer rare in the era of cloud computing, cloud software systems must "prevail" against multiple, diverse failures that are likely to occur. Testing software against multiple failures poses the problem of combinatorial explosion of multiple failures. To address this problem, we present PreFail, a programmable failure-injection tool that enables testers to write a wide range of policies to prune down the large space of multiple failures. We integrate PreFail to three cloud software systems (HDFS, Cassandra, and ZooKeeper), show a wide variety of useful pruning policies that we can write for them, and evaluate the speed-ups in testing time that we obtain by using the policies. In our experiments, our testing approach with appropriate policies found all the bugs that one can find using exhaustive testing while spending 10X--200X less time than exhaustive testing.
Concurrent Updates on Striped Data Streams in Clustered Server Systems
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
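A minimal sketch of the two-dimensional parity layout the paper builds on: an n x n grid of data blocks with one XOR parity block per row and per column (2n parity elements for n^2 data elements). The extra redundancy the paper proposes would mirror half of these parity blocks; all names below are illustrative.

```python
import numpy as np

def two_d_parity(data):
    """data: uint8 array of shape (n, n, block_size), an n x n grid of
    data blocks. Returns n row-parity and n column-parity blocks."""
    row_parity = np.bitwise_xor.reduce(data, axis=1)  # XOR across each row
    col_parity = np.bitwise_xor.reduce(data, axis=0)  # XOR across each column
    return row_parity, col_parity

# Any single lost data block is recoverable from its row (or column) parity:
data = np.random.randint(0, 256, size=(4, 4, 16), dtype=np.uint8)
row_p, _ = two_d_parity(data)
rebuilt = row_p[2] ^ np.bitwise_xor.reduce(np.delete(data[2], 1, axis=0))
assert (rebuilt == data[2, 1]).all()
```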
1.074005
0.02387
0.021657
0.018662
0.010746
0.006331
0.002205
0.000604
0.000102
0.000004
0
0
0
0
Disaster Recovery in Cloud Computing: A Survey.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
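A tiny executable rendering of the semantics just described may help. This is a didactic sketch, not a real answer-set solver: rules are assumed to be encoded as (head, positive_body, negative_body) triples. Stability is checked via the Gelfond-Lifschitz reduct: drop every rule whose negative body intersects the candidate set, strip the remaining negated literals, and compare the least model of the resulting positive program with the candidate.

```python
def reduct(program, model):
    """Gelfond-Lifschitz reduct: keep rules whose negative body avoids
    the candidate model, then drop their 'not' literals."""
    return [(h, pos) for h, pos, neg in program if not (set(neg) & model)]

def least_model(positive_program):
    """Least Herbrand model of a negation-free program, by fixpoint."""
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if set(pos) <= m and h not in m:
                m.add(h)
                changed = True
    return m

def is_stable(program, model):
    return least_model(reduct(program, model)) == model

# p :- not q.   q :- not p.   -> two stable models, {p} and {q}
prog = [('p', [], ['q']), ('q', [], ['p'])]
print(is_stable(prog, {'p'}), is_stable(prog, {'q'}), is_stable(prog, {'p', 'q'}))
```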
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
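To fix intuitions about what such a theorem-prover must do, here is a naive Davis-Putnam-style QBF evaluator that branches on quantifiers in prefix order. It has none of the paper's pruning techniques, and the encoding (prenex CNF, literals as signed integers) is an assumption chosen for illustration.

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a closed QBF in prenex CNF.

    prefix: list of ('forall' | 'exists', var) pairs, outermost first.
    matrix: list of clauses; a literal is +v or -v for variable v.
    """
    if assignment is None:
        assignment = {}
    if not prefix:
        # All variables assigned: check that every clause is satisfied.
        return all(
            any(assignment.get(abs(l)) == (l > 0) for l in clause)
            for clause in matrix
        )
    q, v = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, v: val})
                for val in (False, True))
    return all(branches) if q == 'forall' else any(branches)

# forall x1 exists x2: (x1 or x2) and (not x1 or not x2)  -> True
print(eval_qbf([('forall', 1), ('exists', 2)], [[1, 2], [-1, -2]]))
```

The universal branches are exactly where the paper's improvements apply: a smarter solver avoids exploring both values of a universal variable whenever one branch's result is already forced.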
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
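A compact NumPy sketch of the method as described: build a kernel matrix (an RBF kernel is assumed here purely for illustration, and gamma is an illustrative width parameter), double-center it, and take its leading eigenvectors as the nonlinear principal components.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA: eigendecompose the centered kernel matrix instead
    of the covariance matrix, so components live in the feature space
    induced by the kernel."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas   # projections of the training points
```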
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
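The core numerical move the abstract describes can be shown in a few lines: instead of forming the information matrix A^T A (which squares the condition number), factorize the measurement Jacobian itself. A hypothetical minimal sketch, assuming a dense linearized problem:

```python
import numpy as np

def smooth_trajectory(A, b):
    """Square-root smoothing step: solve the linearized SLAM
    least-squares problem min ||A x - b|| via QR factorization of the
    measurement Jacobian A, rather than via the normal equations
    (A^T A) x = A^T b used by EKF-style solvers."""
    Q, R = np.linalg.qr(A)              # A = Q R, with R upper triangular
    return np.linalg.solve(R, Q.T @ b)  # back-substitution

# Sanity check against the normal-equations solution.
A = np.random.randn(50, 8); b = np.random.randn(50)
assert np.allclose(smooth_trajectory(A, b),
                   np.linalg.solve(A.T @ A, A.T @ b))
```

The column-ordering heuristics the paper highlights would amount to permuting A's columns before factorization so that R stays sparse on large SLAM problems.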
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Cashing in on hints for better prefetching and caching in PVFS and MPI-IO In this work, we propose, implement and test a novel approach to the management of parallel I/O in high-performance computing. Our proposed approach is built upon three complementary ideas: (i) allowing users to place hints into the application code indicating high-level data access patterns, (ii) enabling an optimizing compiler to process these hints and develop I/O optimization strategies, and (iii) enhancing the I/O stack to accept these optimizations and process them across the different layers in the stack. We describe a general hint processing framework that accommodates this approach and demonstrate its potential by applying it to two sample problems: (i) shared storage cache management and (ii) I/O prefetching. In the former, our approach decides, at each program point of interest, the ideal set of data blocks to keep in shared storage caches in the I/O stack, and in the latter, the high-level data access pattern is propagated from application layer to the parallel file system layer for prefetching data from the storage subsystem. Our approach is designed to complement and work synergistically with the MPI-IO and PVFS frameworks and exploits the characteristics of applications written using these software. We tested our approach using both synthetic data access patterns and disk I/O intensive application programs. The results collected indicate that the proposed approach improves over existing storage caching and I/O prefetching schemes by 28% and 66%, respectively.
AMP: An Affinity-Based Metadata Prefetching Scheme in Large-Scale Distributed Storage Systems Prefetching is an effective technique for improving file access performance, which can significantly reduce access latency for I/O systems. In distributed storage systems, prefetching for metadata files is critical for the overall system performance. In this paper, an affinity-based metadata prefetching (AMP) scheme is proposed for metadata servers in large-scale distributed storage systems to provide aggressive metadata prefetching. Through mining useful information about metadata accesses from past history, AMP can discover metadata file affinities accurately and intelligently for prefetching. Compared with LRU and some of the latest file prefetching algorithms such as Nexus and C-Miner, our trace-driven simulations show that AMP can improve buffer cache hit rates by up to 12%, 4.5% and 4%, respectively, while reducing the average response time by up to 60%, 12% and 8%, respectively.
Learning to classify parallel input/output access patterns Input/output performance on current parallel file systems is sensitive to a good match of application access patterns to file system capabilities. Automatic input/output access pattern classification can determine application access patterns at execution time, guiding adaptive file system policies. In this paper, we examine and compare two novel input/output access pattern classification methods based on learning algorithms. The first approach uses a feedforward neural network previously trained on access pattern benchmarks to generate qualitative classifications. The second approach uses hidden Markov models trained on access patterns from previous executions to create a probabilistic model of input/output accesses. In a parallel application, access patterns can be recognized at the level of each local thread or as the global interleaving of all application threads. Classification of patterns at both levels is important for parallel file system performance; we propose a method for forming global classifications from local classifications. We present results from parallel and sequential benchmarks and applications that demonstrate the viability of this approach.
Context-aware prefetching at the storage server In many of today's applications, access to storage constitutes the major cost of processing a user request. Data prefetching has been used to alleviate the storage access latency. Under current prefetching techniques, the storage system prefetches a batch of blocks upon detecting an access pattern. However, the high level of concurrency in today's applications typically leads to interleaved block accesses, which makes detecting an access pattern a very challenging problem. Towards this, we propose and evaluate QuickMine, a novel, lightweight and minimally intrusive method for context-aware prefetching. Under QuickMine, we capture application contexts, such as a transaction or query, and leverage them for context-aware prediction and improved prefetching effectiveness in the storage cache. We implement a prototype of our context-aware prefetching algorithm in a storage-area network (SAN) built using Network Block Device (NBD). Our prototype shows that context-aware prefetching clearly outperforms existing context-oblivious prefetching algorithms, resulting in improvements of up to a factor of 2 in application latency for two e-commerce workloads with repeatable access patterns, TPC-W and RUBiS.
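A toy sketch of the context-aware idea (class and method names are invented for illustration; the real QuickMine system mines block correlations inside the storage cache): correlations are recorded per application context, such as a transaction or query, so interleaved streams from concurrent requests do not pollute one another's patterns.

```python
from collections import defaultdict, deque

class ContextPrefetcher:
    """Per-context block-correlation mining, in the spirit of QuickMine."""

    def __init__(self, history=2):
        # Recent blocks are tracked separately for each context, so the
        # mined pairs come from one logical stream, not the interleaving.
        self.last = defaultdict(lambda: deque(maxlen=history))
        self.next_of = defaultdict(lambda: defaultdict(int))

    def access(self, ctx, block):
        for prev in self.last[ctx]:
            self.next_of[prev][block] += 1   # mine "prev -> block" pairs
        self.last[ctx].append(block)

    def prefetch(self, block, k=2):
        """Return the k blocks most often observed after `block`."""
        ranked = sorted(self.next_of[block].items(), key=lambda kv: -kv[1])
        return [b for b, _ in ranked[:k]]
```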
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Indexing By Latent Semantic Analysis
Disk Shadowing Disk shadowing is a technique for maintaining a set of two or more identical disk images on separate disk devices. Its primary purpose is to enhance reliability and availability of secondary storage by providing multiple paths to redundant data. However, shadowing can also boost I/O performance. In this paper, we contend that intelligent device scheduling of shadowed disks increases the I/O rate by allowing parallel reads and by substantially reducing the average seek time for random reads. In particular, we develop an analytic model which shows that the seek time for a random read in a shadow set is a monotonic decreasing function of the number of disks.
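The monotonic-decrease claim is easy to reproduce with a Monte Carlo version of the seek model (a sketch under simplifying assumptions: head positions and the target cylinder are uniform on [0, 1], and the scheduler reads from the disk whose head is nearest):

```python
import random

def expected_seek(n_disks, trials=100_000):
    """Monte Carlo estimate of the expected seek distance for a random
    read on an n-way shadow set: the best disk is the one whose head
    happens to be closest to the target cylinder."""
    total = 0.0
    for _ in range(trials):
        target = random.random()
        total += min(abs(random.random() - target) for _ in range(n_disks))
    return total / trials

# Expected seek distance shrinks as the shadow set grows:
for n in (1, 2, 4, 8):
    print(n, round(expected_seek(n), 4))
```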
Fine-Grained Mobility in the Emerald System (Extended Abstract)
Representing actions in logic programs and default theories a situation calculus approach We address the problem of representing common sense knowledge about action domains in the formalisms of logic programming and default logic. We employ a methodology proposed by Gelfond and Lifschitz which involves first defining a high-level language for representing knowledge about actions, and then specifying a translation from the high-level action language into a general-purpose formalism, such as logic programming. Accordingly, we define a high-level action language AE, and specify sound and complete translations of portions of AE into logic programming and default logic. The language AE includes propositions that represent “static causal laws” of the following kind: a fluent formula ψ can be made true by making another fluent formula φ true (or, more precisely, ψ is caused whenever φ is caused). Such propositions are more expressive than the state constraints traditionally used to represent background knowledge. Our translations of AE domain descriptions into logic programming and default logic are simple, in part because the noncontrapositive nature of causal laws is easily reflected in such rule-based formalisms.
A cost-benefit scheme for high performance predictive prefetching
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps can be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also propose a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discuss the advantages, the limitations and the future work of our study. Both practical and theoretical contributions are also discussed in the paper.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.1
0.015385
0.008333
0
0
0
0
0
0
0
0
0
0
Horizontal and Vertical Ensemble with Deep Representation for Classification. Representation learning, especially by deep learning, has been widely applied in classification. However, how to use a limited amount of labeled data to achieve good classification performance with a deep neural network, and how the learned features can further improve classification, remain open questions. In this paper, we propose Horizontal Voting, Vertical Voting and Horizontal Stacked Ensemble methods to improve the classification performance of deep neural networks. In the ICML 2013 Black Box Challenge, using these methods independently, Bing Xu achieved 3rd in the public leaderboard and 7th in the private leaderboard; Jingjing Xie achieved 4th in the public leaderboard and 5th in the private leaderboard.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On Some Tractable Cases of Logical Filtering Filtering denotes any method whereby an agent updates its belief state—its knowledge of the state of the world—from a sequence of actions and observations. In logical filtering, the belief state is a logical formula describing the possible world states. Efficient algorithms for logical filtering bear important implications on reasoning tasks such as planning and diagnosis. In this paper, we will identify classes of transition constraints that are amenable to compact and indefinite filtering—presenting efficient algorithms wherever necessary. We will first show that connected row-convex (CRC) constraints are amenable to efficient filtering when path-consistency is enforced in appropriate steps. We will then extend this theory to provide a filtering algorithm based on repeatedly enforcing path-consistency and embedding the domain values of the related variables in tree structures to guarantee global consistency. Finally, we will identify and comment on the problem of multi-agent localization as a potential application of the theory developed in the paper (under some reasonable assumptions).
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
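The abstract does not spell out the RT-DMQ and RT-CMQ policies, so the following is only a generic hedged sketch of the two ingredients such real-time mirrored-disk policies combine: routing each request to the less loaded mirror, and serving each queue in earliest-deadline-first order.

    # Generic sketch (not the paper's RT-DMQ/RT-CMQ definitions): send each
    # request to the mirror with the shorter queue, and keep each queue
    # ordered by earliest deadline.
    import heapq

    queues = [[], []]                       # one deadline-ordered heap per mirror

    def submit(deadline, block):
        disk = min((0, 1), key=lambda d: len(queues[d]))
        heapq.heappush(queues[disk], (deadline, block))
        return disk

    def next_request(disk):
        return heapq.heappop(queues[disk]) if queues[disk] else None

    for dl, blk in [(30, 'a'), (10, 'b'), (20, 'c')]:
        submit(dl, blk)
    print(next_request(0), next_request(1))  # earliest deadline first per disk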
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
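The factorization idea this abstract describes can be shown on a toy linearized problem: instead of forming the information matrix J^T J as an EKF-style approach effectively does, factor the measurement Jacobian J into square-root (triangular) form and back-substitute. J and r below are random stand-ins, not a real SLAM instance.

    import numpy as np

    rng = np.random.default_rng(1)
    J = rng.normal(size=(50, 12))      # measurement Jacobian (tall, full rank)
    r = rng.normal(size=50)            # residual vector

    Q, R = np.linalg.qr(J)             # square-root (triangular) factor R
    dx = np.linalg.solve(R, Q.T @ r)   # back-substitution on the R factor

    # Same answer as the normal-equations (information-matrix) route:
    assert np.allclose(dx, np.linalg.solve(J.T @ J, J.T @ r))

In the real systems the paper targets, J is sparse, and a good column ordering keeps the R factor sparse, which is where the claimed speedups come from.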
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required to train better incident classifiers, since labeled data is expensive to obtain and the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
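The array layout in this abstract is easy to make concrete with XOR arithmetic: an n x n grid of data blocks, n row parities and n column parities (the 2n parity elements), plus n added mirrors of half of them. The toy integer blocks and the choice to mirror the row parities are assumptions of this sketch.

    import numpy as np

    n = 4
    data = np.random.default_rng(2).integers(0, 256, size=(n, n))

    row_parity = np.bitwise_xor.reduce(data, axis=1)   # n row parities
    col_parity = np.bitwise_xor.reduce(data, axis=0)   # n column parities
    extra = row_parity.copy()                          # n added mirror parities

    # Recover a lost block (i, j) from its row parity and the row's survivors:
    i, j = 1, 2
    survivors = np.bitwise_xor.reduce(np.delete(data[i], j))
    assert survivors ^ row_parity[i] == data[i, j]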
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Tree clustering for constraint networks The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.
Constraint Satisfaction from a Deductive Viewpoint This paper reports the result of testing the author's proof techniques on the class of constraint satisfaction problems (CSP). This experiment has been successful in the sense that a completely general proof technique turns out to behave well also for this special class of problems which itself has received considerable attention in the community. So at the same time the paper happens to present a new (deductive) mechanism for solving constraint satisfaction problems that is of interest in its own right. This mechanism may be characterized as a bottom-up, lazy-evaluation technique which reduces any such problem to the problem of evaluating a database expression typically involving a number of joins. A way of computing such an expression is proposed.
Projection in Decomposed Situation Calculus We investigate the impact of decomposition on projection in the situation calculus. We show that performing projection with situation calculus theories can benefit from their decomposition into parts associated with sub-domains. Particularly, we provide message-passing algorithms that take advantage of the particular structure of situation calculus theories to perform the task of projection. These algorithms are shown to be sound and complete for this task for different scenarios, including actions with non-deterministic effects, partially specified initial situation and observations in situations later than the first one. They can be used for distributed reasoning about situation calculus theories or to speed up computation, in those cases where they are efficient. We characterize the kind of messages that must be sent between partitions for each of our algorithms and scenarios. This allows us to provide computational complexity results for the proposed algorithms under some assumptions. Our results are important for analyzing and devising planning, diagnosis and control algorithms for large domains that are made of interacting parts.
Compendium of Parameterized Problems at Higher Levels of the Polynomial Hierarchy. We present a list of parameterized problems together with a complexity classification of whether they allow a fixed-parameter tractable reduction to SAT or not. These problems are parameterized versions of problems whose complexity lies at the second level of the Polynomial Hierarchy or higher.
Extremal problems in logic programming and stable model computation We study the following problem: given a class of logic programs C, determine the maximum number of stable models of a program from C. We establish the maximum for the class of all logic programs with at most n clauses, and for the class of all logic programs of size at most n. We also characterize the programs for which the maxima are attained. We obtained similar results for the class of all disjunctive logic programs with at most n clauses, each of length at most m, and for the class of all disjunctive logic programs of size at most n. Our results on logic programs have direct implications for the design of algorithms to compute stable models. Several such algorithms, similar in spirit to the Davis-Putnam procedure, are described in the paper. Our results imply that there is an algorithm that finds all stable models of a program with n clauses after considering the search space of size O(3^(n/3)) in the worst case. Our results also provide some insights into the question of representability of families of sets as families of stable models of logic programs.
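For readers counting stable models along with this abstract, the underlying check is small enough to write out: a set of atoms is a stable model iff it equals the least model of the Gelfond-Lifschitz reduct. The rule encoding (head, positive body, negation-as-failure body) is this sketch's assumption.

    def least_model(rules):
        # rules: list of (head, positive body) pairs, no negation left
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in rules:
                if head not in model and all(a in model for a in pos):
                    model.add(head)
                    changed = True
        return model

    def is_stable(program, candidate):
        # Reduct: drop rules whose naf body intersects the candidate,
        # then delete the remaining naf literals.
        reduct = [(h, pos) for h, pos, naf in program
                  if not (set(naf) & candidate)]
        return least_model(reduct) == candidate

    # p :- not q.   q :- not p.   (two stable models: {p} and {q})
    prog = [("p", [], ["q"]), ("q", [], ["p"])]
    print(is_stable(prog, {"p"}), is_stable(prog, {"q"}), is_stable(prog, {"p", "q"}))
    # True True False

This prog has two stable models; chaining independent copies of such mutually exclusive groups is what drives the extremal counts studied in the paper.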
Fixed-parameter intractability The authors consider the complexity behavior of parameterized problems that they term fixed-parameter tractability: for each fixed parameter value y the problem is solvable in time O(n^c), where c is a constant independent of the parameter y. They introduce a structure theory with which to address the apparent intractability of some parameterized problems, and they obtain completeness, density, and separation/collapse results. The greatest appeal of the theory is in the wide range of natural problems to which it can be applied, and in the practical significance of fixed-parameter problem complexities. Technical aspects are also interesting.
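The notion defined above (solvable in time whose exponent does not grow with the parameter) is classically illustrated by k-Vertex Cover: branch on an endpoint of an uncovered edge, giving at most 2^k leaves. A sketch with an edge-list graph representation of my own choosing:

    # k-Vertex Cover by bounded branching, O(2^k * m) time: one endpoint of
    # any uncovered edge must belong to every cover, so try both.
    def vertex_cover(edges, k):
        if not edges:
            return set()                 # every edge covered
        if k == 0:
            return None                  # budget exhausted, edge remains
        u, v = edges[0]
        for pick in (u, v):
            rest = [e for e in edges if pick not in e]
            sub = vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {pick}
        return None

    print(vertex_cover([(1, 2), (2, 3), (3, 4)], 2))   # a size-2 cover, {1, 3}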
Parameterized Complexity and Kernel Bounds for Hard Planning Problems The propositional planning problem is a notoriously difficult computational problem. Downey et al. (1999) initiated the parameterized analysis of planning (with plan length as the parameter) and Bäckström et al. (2012) picked up this line of research and provided an extensive parameterized analysis under various restrictions, leaving open only one stubborn case. We continue this work and provide a full classification. In particular, we show that the case when actions have no preconditions and at most e postconditions is fixed-parameter tractable if e ≤ 2 and W[1]-complete otherwise. We show fixed-parameter tractability by a reduction to a variant of the Steiner Tree problem; this problem has been shown fixed-parameter tractable by Guo et al. (2007). If a problem is fixed-parameter tractable, then it admits a polynomial-time self-reduction to instances whose input size is bounded by a function of the parameter, called the kernel. For some problems, this function is even polynomial, which has desirable computational implications. Recent research in parameterized complexity has focused on classifying fixed-parameter tractable problems on whether they admit polynomial kernels or not. We revisit all the previously obtained restrictions of planning that are fixed-parameter tractable and show that none of them admits a polynomial kernel unless the polynomial hierarchy collapses to its third level.
Fixed-Parameter Algorithms For Artificial Intelligence, Constraint Satisfaction and Database Problems We survey the parameterized complexity of problems that arise in artificial intelligence, database theory and automated reasoning. In particular, we consider various parameterizations of the constraint satisfaction problem, the evaluation problem of Boolean conjunctive database queries and the propositional satisfiability problem. Furthermore, we survey parameterized algorithms for problems arising in the context of the stable model semantics of logic programs, for a number of other problems of non-monotonic reasoning, and for the computation of cores in data exchange.
Dynamic Multi-Resource Load Balancing in Parallel Database Systems
Heuristic search + symbolic model checking = efficient conformant planning We consider the problem of how an agent creates a discrete spatial representation from its continuous interactions with the environment. Such representation will be the minimal one that explains the experiences of the agent in the environment. In this ...
Transaction support in read optimized and write optimized file systems This paper provides a comparative analysis of five implementations of transaction support. The first of the methods is the traditional approach of implementing transaction processing within a data manager on top of a read optimized file system. The second also assumes a traditional file system but embeds transaction support inside the file system. The third model considers a traditional data manager on top of a write optimized file system. The last two models both embed transaction support inside a write optimized file system, each using a different logging mechanism. Our results show that in a transaction processing environment, a write optimized file system often yields better performance than one optimized for reads. In addition, we show that file system embedded transaction managers can perform as well as data managers when transaction throughput is limited by I/O bandwidth. Finally, even when the CPU is the critical resource, the difference in performance between a data manager and an embedded system is much smaller than previous work has shown.
Regularization and Semi-Supervised Learning on Large Graphs We consider the problem of labeling a partially labeled graph. This setting may arise in a number of situations from survey sampling to information retrieval to pattern recognition in manifold settings. It is also of potential practical importance, when the data is abundant, but labeling is expensive or requires human assistance. Our approach develops a framework for regularization on such graphs. The algorithms are very simple and involve solving a single, usually sparse, system of linear equations. Using the notion of algorithmic stability, we derive bounds on the generalization error and relate it to structural invariants of the graph. Some experimental results testing the performance of the regularization algorithm and the usefulness of the generalization bound are presented.
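The "single, usually sparse, system of linear equations" this abstract refers to can be made concrete with Laplacian-regularized least squares: minimize the squared error on labeled vertices plus mu * f^T L f, whose minimizer solves (C + mu*L) f = C y, where C selects the labeled vertices. The 4-vertex graph, labels, and mu below are toy assumptions.

    import numpy as np

    A = np.array([[0, 1, 1, 0],        # adjacency of a small connected graph
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], float)
    L = np.diag(A.sum(1)) - A          # combinatorial graph Laplacian

    y = np.array([1.0, 0.0, 0.0, -1.0])  # vertices 0 and 3 labeled, rest unknown
    C = np.diag([1.0, 0.0, 0.0, 1.0])    # weight only the labeled vertices

    mu = 0.5
    f = np.linalg.solve(C + mu * L, C @ y)
    print(f)                           # smooth label estimates for all vertices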
FAWNdamentally power-efficient clusters
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.019558
0.022442
0.017829
0.017143
0.01461
0.00782
0.005001
0.000482
0.000049
0.000005
0
0
0
0
Object detection using hybridization of static and dynamic feature spaces and its exploitation by ensemble classification. This paper presents a learning mechanism based on hybridization of static and dynamic learning. Realizing the detection performances offered by the state-of-the-art deep learning techniques and the competitive performances offered by the conventional static learning techniques, we propose the idea of exploitation of the concatenated (parallel) hybridization of the static and dynamic learning-based feature spaces. This is contrary to the cascaded (series) hybridization topology in which the initial feature space (provided by the conventional, static, and handcrafted feature extraction technique) is explored using deep, dynamic, and automated learning technique. Consequently, the characteristics already suppressed by the conventional representation cannot be explored by the dynamic learning technique. Instead, the proposed technique combines the conventional static and deep dynamic representation in concatenated (parallel) topology to generate an information-rich hybrid feature space. Thus, this hybrid feature space may aggregate the good characteristics of both conventional and deep representations, which are then explored using an appropriate classification technique. We also hypothesize that ensemble classification may better exploit this parallel hybrid perspective of the feature spaces. For this purpose, pyramid histogram of oriented gradients-based static learning has been incorporated in conjunction with convolution neural network-based deep learning to produce concatenated hybrid feature space. This hybrid space is then explored with various state-of-the-art ensemble classification techniques. We have considered the publicly available INRIA person and Caltech pedestrian standard image datasets to assess the performance of the proposed hybrid learning system. Furthermore, McNemar’s test has been used to statistically validate the outperformance of the proposed technique over various contemporary techniques. The validated experimental results show that the employment of the proposed hybrid representation results in effective detection performance (an AUC of 0.9996 for INRIA person and 0.9985 for Caltech pedestrian datasets) as compared to the individual static and dynamic representations.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required to train better incident classifiers, since labeled data is expensive to obtain and the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Disk|Crypt|Net: rethinking the stack for high-performance video streaming Conventional operating systems used for video streaming employ an in-memory disk buffer cache to mask the high latency and low throughput of disks. However, data from Netflix servers show that this cache has a low hit rate, so does little to improve throughput. Latency is not the problem it once was either, due to PCIe-attached flash storage. With memory bandwidth increasingly becoming a bottleneck for video servers, especially when end-to-end encryption is considered, we revisit the interaction between storage and networking for video streaming servers in pursuit of higher performance. We show how to build high-performance userspace network services that saturate existing hardware while serving data directly from disks, with no need for a traditional disk buffer cache. Employing netmap, and developing a new diskmap service, which provides safe high-performance userspace direct I/O access to NVMe devices, we amortize system overheads by utilizing efficient batching of outstanding I/O requests, process-to-completion, and zerocopy operation. We demonstrate how a buffer-cache-free design is not only practical, but required in order to achieve efficient use of memory bandwidth on contemporary microarchitectures. Minimizing latency between DMA and CPU access by integrating storage and TCP control loops allows many operations to access only the last-level cache rather than bottle-necking on memory bandwidth. We illustrate the power of this design by building Atlas, a video streaming web server that outperforms state-of-the-art configurations, and achieves ~72Gbps of plaintext or encrypted network traffic using a fraction of the available CPU cores on commodity hardware.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required to train better incident classifiers, since labeled data is expensive to obtain and the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Implicit abstraction heuristics State-space search with explicit abstraction heuristics is at the state of the art of cost-optimal planning. These heuristics are inherently limited, nonetheless, because the size of the abstract space must be bounded by some, even if a very large, constant. Targeting this shortcoming, we introduce the notion of (additive) implicit abstractions, in which the planning task is abstracted by instances of tractable fragments of optimal planning. We then introduce a concrete setting of this framework, called fork-decomposition, that is based on two novel fragments of tractable cost-optimal planning. The induced admissible heuristics are then studied formally and empirically. This study testifies to the accuracy of the fork-decomposition heuristics, yet our empirical evaluation also stresses the tradeoff between their accuracy and the runtime complexity of computing them. Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a "database." By contrast, while fork-decomposition heuristics can be calculated in polynomial time, computing them is far from being fast. To address this problem, we show that the time-per-node complexity bottleneck of the fork-decomposition heuristics can be successfully overcome. We demonstrate that an equivalent of the explicit abstraction notion of a "database" exists for the fork-decomposition abstractions as well, despite their exponential-size abstract spaces. We then verify empirically that heuristic search with the "databased" fork-decomposition heuristics favorably competes with the state of the art of cost-optimal planning.
Planning under continuous time and resource uncertainty: a challenge for AI We outline a class of problems, typical of Mars rover operations, that are problematic for current methods of planning under uncertainty. The existing methods fail because they suffer from one or more of the following limitations: 1) they rely on very simple models of actions and time, 2) they assume that uncertainty is manifested in discrete action outcomes, 3) they are only practical for very small problems. For many real-world problems, these assumptions fail to hold. In particular, when planning the activities for a Mars rover, none of the above assumptions is valid: 1) actions can be concurrent and have differing durations, 2) there is uncertainty concerning action durations and consumption of continuous resources like power, and 3) typical daily plans involve on the order of a hundred actions. This class of problems may be of particular interest to the UAI community because both classical and decision-theoretic planning techniques may be useful in solving it. We describe the rover problem, discuss previous work on planning under uncertainty, and present a detailed, but very small, example illustrating some of the difficulties of finding good plans.
On the Hardness of Planning Problems with Simple Causal Graphs We present three new complexity results for classes of planning problems with simple causal graphs. First, we describe a polynomial time algorithm that uses macros to generate plans for a class of planning problems with binary state variables and acyclic causal graphs. This implies that plan generation may not be intractable just because a planning problem has an exponential-length solution. We also prove that the problem of plan existence for planning problems with multi-valued variables and chain causal graphs is NP-hard. Finally, we show that plan existence for planning problems with binary state variables and polytree causal graphs is NP-complete.
On the complexity of planning for agent teams and its implications for single agent planning If the complexity of planning for a single agent is described by some function f of the input, how much more difficult is it to plan for a team of n cooperating agents? If these agents are completely independent, we can simply solve n single agent problems, scaling linearly with the number of agents. But if all the agents interact tightly, we really need to solve a single problem that is n times larger, which could be exponentially (in n) harder to solve. Is a more general characterization possible? To formulate this question precisely, we minimally extend the standard STRIPS model to describe multi-agent planning problems. Then, we identify two problem parameters that help us answer our question. The first parameter is independent of the precise task the multi-agent system should plan for, and it captures the structure of the possible direct interactions between the agents via the tree-width of a graph induced by the team. The second parameter is task-dependent, and it captures the minimal number of interactions by the "most interacting" agent in the team that is needed to solve the problem. We show that multi-agent planning problems can be solved in time exponential only in these parameters. Thus, when these parameters are bounded, the complexity scales only polynomially in the size of the agent team. These results also have direct implications for the single-agent case: by casting single-agent planning tasks as multi-agent planning tasks, we can devise novel methods for decomposition-based planning for single agents. We analyze one such method, and use the techniques developed to provide some of the strongest tractability results for classical single-agent planning to date.
Cost-Sharing Approximations for h+ Relaxations based on (either complete or partial) ignoring delete effects of the actions provide the basis for some seminal classical planning heuristics. However, the palette of the conceptual tools exploited by these heuristics remains rather limited. We study a framework for approximating the optimal cost solutions for problems with no delete effects that bridges between certain works on heuristic search for probabilistic reasoning and classical planning. In particular, this framework generalizes some previously known, as well as suggests some novel, tools for heuristic estimates for Strips planning.
Accuracy of admissible heuristic functions in selected planning domains The efficiency of optimal planning algorithms based on heuristic search crucially depends on the accuracy of the heuristic function used to guide the search. Often, we are interested in domain-independent heuristics for planning. In order to assess the limitations of domain-independent heuristic planning, we analyze the (in)accuracy of common domain-independent planning heuristics in the IPC benchmark domains. For a selection of these domains, we analytically investigate the accuracy of the h+ heuristic, the hm family of heuristics, and certain (additive) pattern database heuristics, compared to the perfect heuristic h*. Whereas h+ and additive pattern database heuristics usually return cost estimates proportional to the true cost, non-additive hm and non-additive pattern-database heuristics can yield results underestimating the true cost by arbitrarily large factors.
AI planning: systems and techniques
ADL: exploring the middle ground between STRIPS and the situation calculus
Symmetry Reduction for SAT Representations of Transition Systems Symmetries are inherent in systems that consist of several interchangeable objects or components. When reasoning about such systems, big computational savings can be obtained if the presence of symmetries is recognized. In earlier work, symmetries in constraint satisfaction problems have been handled by introducing symmetry-breaking constraints. In reasoning about transition systems, notably in model-checking and reachability analysis in computer-aided verification, symmetries have been handled by symmetry reduction algorithms that eliminate redundant search caused by symmetries. In this work, we investigate symmetry handling in a problem in the intersection of these two areas: handling symmetries in representations of transition systems in the propositional logic. The problem shows up in representations of AI planning as a satisfiability problem, and in recent approaches to model-checking that represent transition systems as propositional formulae. Symmetry-breaking constraints can be added to the propositional logic representation of transition sequences for removing all the symmetry at one point of time, but removing symmetry from the whole transition sequence is much more difficult, and has not been addressed in earlier work. We present a solution to the problem.
Fixed-Parameter Intractability II (Extended Abstract) We describe new results in parameterized complexity theory, including an analogue of Ladner's theorem, and natural problems concerning k-move games which are complete for parameterized problem classes that are analogues of P-space.
Deep learning via semi-supervised embedding We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
Polynomial-time compression This paper studies the class of infinite sets that have minimal perfect hash functions—one-to-one onto maps between the sets and Σ*—computable in polynomial time. We will call such sets P-compressible. We show that all standard NP-complete sets are P-compressible, and give a structural condition, E = Σ₂E, sufficient to ensure that all infinite NP sets are P-compressible. On the other hand, we present evidence that some infinite NP sets, and indeed some infinite P sets, are not P-compressible: if an infinite NP set A is P-compressible, then A has an infinite sparse NP subset, yet we construct a relativized world in which some infinite NP sets lack infinite sparse NP subsets. This world is built upon a result that is of interest in its own right; we determine optimally—with respect to any relativizable proof technique—the complexity of the easiest infinite sparse subsets that infinite P sets are guaranteed to have.
A Markov Decision Problem Approach to Goal Attainment A new Markov decision problem (MDP)-based method for managing goal attainment (GA), which is the process of planning and controlling actions that are related to the achievement of a set of defined goals in the presence of resource and time constraints, is proposed. Specifically, we address the problem as one of optimally selecting a sequence of actions to transform the system and/or its environment from an initial state to a desired state. We begin with a method of explicitly mapping an action-GA graph to an MDP graph and developing a dynamic programming (DP) recursion to solve the MDP problem. For larger problems having exponential complexity with respect to the number of goals, we propose guided search algorithms such as AO*, AOε*, and greedy search techniques, whose search power rests on the efficiency of their heuristic evaluation functions (HEFs). Our contribution in this part stems from the introduction of a new problem-specific HEF to aid the search process. We demonstrate reductions in the computational costs of the proposed techniques through performance comparison with standard DP techniques. We conclude this paper with a method to address situations in which alternative strategies (e.g., second best) are required. The new extended AO* algorithm identifies alternative control sequences for attaining the organizational goals.
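The "dynamic programming (DP) recursion" this abstract mentions is, in its plainest form, value iteration on the Bellman equation V(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]. Everything below (states, transitions, rewards) is an invented toy instance, not the paper's goal-attainment model.

    import numpy as np

    n_states, n_actions, gamma = 3, 2, 0.9
    # P[a][s, s'] = transition probability, R[s, a] = immediate reward
    P = np.array([[[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]],
                  [[0.2, 0.8, 0.0], [0.2, 0.0, 0.8], [0.0, 0.0, 1.0]]])
    R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])  # goal state pays 1

    V = np.zeros(n_states)
    for _ in range(200):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)    # one Bellman backup
        V = Q.max(axis=1)
    print(V, Q.argmax(axis=1))          # state values and a greedy policy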
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.061732
0.05613
0.020088
0.016711
0.013581
0.008391
0.00401
0.000641
0.000053
0.000002
0
0
0
0
Approximating closed fork-join queueing networks using product-form stochastic Petri-nets Product-form SPN approximation for fork-join networks with interfering requests. Conditions for accurate approximation. Validation against simulation. Evaluation of the performance of replication in NoSQL cloud datastores. Computing paradigms have shifted towards highly parallel processing and massive replication of data. This entails the efficient distribution of requests and the synchronization of results provided to users. Guaranteeing SLAs requires the ability to evaluate the performance of such systems while taking the effect of non-parallel workloads into consideration. This can be achieved with performance models that are able to represent both parallel and sequential workloads. This paper presents a product-form stochastic Petri-net approximation of fork-join queueing networks with interfering requests. We derive the necessary conditions that guarantee the accuracy of the approximations and verify this through examples in comparison to simulation. We apply these approximate models to the performance evaluation of replication in NoSQL cloud datastores and illustrate the composition of large models from smaller models, thus facilitating the ability to model a range of deployment scenarios. We show the efficiency of our solution method, which finds the product-form solution of the models without the representation of the state-space of the underlying CTMC.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
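The stable model test is simple enough to state in code: a candidate set is stable iff it equals the least model of the Gelfond-Lifschitz reduct. A brute-force sketch (the rule encoding is ours; real answer-set solvers are far more sophisticated):

```python
from itertools import chain, combinations

# A rule is (head, pos_body, neg_body); e.g. "a :- b, not c" is ("a", {"b"}, {"c"}).
def least_model(rules):
    """Least fixpoint of a positive program (no negation in bodies)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(rules, candidate):
    """Gelfond-Lifschitz test: candidate is stable iff it equals the least
    model of the reduct of the program with respect to the candidate."""
    reduct = [(h, pos, set()) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

def stable_models(rules, atoms):
    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(rules, set(s))]

# p :- not q.   q :- not p.   This program has two stable models: {p} and {q}.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(stable_models(rules, ["p", "q"]))  # [{'p'}, {'q'}]
```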
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
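The baseline this paper improves on, a Davis-Putnam-style recursion extended to quantifiers, fits in a few lines: expand the outermost variable on both truth values and combine the results with AND for universal and OR for existential quantifiers. A toy evaluator with none of the paper's pruning techniques (the encoding is ours):

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a closed prenex QBF. prefix: list of ('A'|'E', var);
    matrix: CNF as a list of sets of signed integer literals."""
    assignment = assignment or {}
    if not prefix:
        # All variables bound: evaluate the CNF matrix directly.
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in matrix)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, {**assignment, v: val})
                for val in (True, False))
    return all(branches) if q == 'A' else any(branches)

# forall x exists y . (x <-> y), in CNF: (~x or y) and (x or ~y). True.
print(eval_qbf([('A', 1), ('E', 2)], [{-1, 2}, {1, -2}]))  # True
```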
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
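A compact sketch of the computation described here, assuming an RBF kernel: build the Gram matrix, double-center it, eigendecompose, and project. Parameter names are ours.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: eigendecompose the centered Gram
    matrix and return projections of the training points."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one        # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                    # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))  # normalize
    return Kc @ alphas                                 # projections onto components

X = np.random.RandomState(0).randn(100, 5)
print(kernel_pca(X).shape)  # (100, 2)
```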
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
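The core numerical step is ordinary linear least squares solved through a QR factorization of the stacked measurement Jacobian, with the triangular factor R acting as the square-root information matrix. A toy sketch with random data (sizes and names are ours):

```python
import numpy as np

def solve_via_qr(A, b):
    """Solve the linearized least-squares problem min ||A x - b|| via QR:
    A = Q R, then back-substitute; R plays the role of the square-root
    information matrix."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.RandomState(0)
A = rng.randn(50, 6)                  # stacked measurement Jacobians (toy sizes)
x_true = rng.randn(6)
b = A @ x_true + 0.01 * rng.randn(50) # noisy measurements
print(np.allclose(solve_via_qr(A, b), x_true, atol=0.05))  # True
```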
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Sentiment Classification Of Movie Reviews Using Contextual Valence Shifters We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.
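A toy version of the first (term-counting) method with contextual valence shifters; the tiny word lists stand in for the General Inquirer lexicons, and applying a shifter only to the next sentiment term is our simplifying assumption:

```python
# Minimal term-counting sentiment scorer with valence shifters.
POSITIVE = {"good", "great", "enjoyable"}
NEGATIVE = {"bad", "boring", "awful"}
NEGATIONS = {"not", "never"}
INTENSIFIERS = {"very", "extremely"}   # scale the next sentiment term up
DIMINISHERS = {"somewhat", "barely"}   # scale the next sentiment term down

def review_score(tokens):
    score, shift, scale = 0.0, 1, 1.0
    for tok in tokens:
        if tok in NEGATIONS:
            shift = -1
        elif tok in INTENSIFIERS:
            scale = 2.0
        elif tok in DIMINISHERS:
            scale = 0.5
        elif tok in POSITIVE or tok in NEGATIVE:
            base = 1.0 if tok in POSITIVE else -1.0
            score += shift * scale * base
            shift, scale = 1, 1.0    # shifters apply to one sentiment term only
    return score  # > 0: positive, < 0: negative, 0: neutral

print(review_score("not a very good movie".split()))  # -2.0
```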
Introduction to the special issue on summarization As the amount of on-line information increases, systems that can automatically summarize one or more documents become increasingly desirable. Recent research has investigated types of summaries, methods to create them, and methods to evaluate them. Several evaluation competitions (in the style of the National Institute of Standards and Technology's [NIST's] Text Retrieval Conference [TREC]) have helped determine baseline performance levels and provide a limited set of training material. Frequent workshops and symposia reflect the ongoing interest of researchers around the world. The volume of papers edited by Mani and Maybury (1999) and a book (Mani 2001) provide good introductions to the state of the art in this rapidly evolving subfield. A summary can be loosely defined as a text that is produced from one or more texts, that conveys important information in the original text(s), and that is no longer than half of the original text(s) and usually significantly less than that. Text here is used rather loosely and can refer to speech, multimedia documents, hypertext, etc. The main goal of a summary is to present the main ideas in a document in less space. If all sentences in a text document were of equal importance, producing a summary would not be very effective, as any reduction in the size of a document would carry a proportional decrease in its informativeness. Luckily, information content in a document appears in bursts, and one can therefore distinguish between more and less informative segments. Identifying the informative segments at the expense of the rest is the main challenge in summarization. Of the many types of summary that have been identified (Borko and Bernier 1975; Cremmins 1996; Sparck Jones 1999; Hovy and Lin 1999), indicative summaries provide an idea of what the text is about without conveying specific content, and informative ones provide some shortened version of the content. Topic-oriented summaries concentrate on the reader's desired topic(s) of interest, whereas generic summaries reflect the author's point of view. Extracts are summaries created by reusing portions (words, sentences, etc.) of the input text verbatim, while abstracts are created by regenerating
SimpleNLG: a realisation engine for practical applications This paper describes SimpleNLG, a realisation engine for English which aims to provide simple and robust interfaces to generate syntactic structures and linearise them. The library is also flexible in allowing the use of mixed (canned and non-canned) representations.
Movie Review Mining: a Comparison between Supervised and Unsupervised Classification Approaches Web content mining is intended to help people discover valuable information from large amounts of unstructured data on the web. Movie review mining classifies movie reviews into two polarities: positive and negative. As a type of sentiment-based classification, movie review mining is different from other topic-based classifications. Few empirical studies have been conducted in this domain. This paper investigates movie review mining using two approaches: machine learning and semantic orientation. The approaches are adapted to the movie review domain for comparison. Our results are comparable to or even better than previous findings. We also find that movie review mining is a more challenging application than many other types of review mining. The challenges of movie review mining lie in the fact that factual information is always mixed with real-life review data and that ironic words are used in writing movie reviews. Future work for improving existing approaches is also suggested.
Features for audio and music classification Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and two new sets based on perceptual models of hearing. The temporal behavior of the features is analyzed and parameterized and these parameters are included as additional features. Using a standard Gaussian framework for classification, results show that the temporal behavior of features is important for both music and audio classification. In addition, classification is better, on average, if based on features from models of auditory perception rather than on standard features.
Playscript Classification and Automatic Wikipedia Play Articles Generation In this work, we aim to create Wikipedia pages on plays automatically by extracting relevant information from various web sources. Our approach involves building an efficient classifier that can classify web documents as play scripts. From the set of correctly classified instances of play scripts, we extract relevant play-related information from the documents and use it to obtain additional information from various sources on the web. This information is aggregated and human-readable Wikipedia pages are created using a bot. The results of our experiments show that classifiers trained by combining our designed features along with "bag-of-words" (bow) features outperform classifiers trained using only bow features. Our approach further shows that good quality human-readable pages can be created using our bot. Such an automatic page generation process can eventually ensure a more complete Wikipedia.
Learning Semantic Representations for the Phrase Translation Model. This paper presents a novel semantic-based phrase translation model. A pair of source and target phrases are projected into continuous-valued vector representations in a low-dimensional latent semantic space, where their translation score is computed by the distance between the pair in this new space. The projection is performed by a multi-layer neural network whose weights are learned on parallel training data. The learning is aimed to directly optimize the quality of end-to-end machine translation results. Experimental evaluation has been performed on two Europarl translation tasks, English-French and German-English. The results show that the new semantic-based phrase translation model significantly improves the performance of a state-of-the-art phrase-based statistical machine translation system, leading to a gain of 0.7-1.0 BLEU points.
Tensor Deep Stacking Networks A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary {0, 1} features. A learning algorithm for the T-DSN’s weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.
Learning nonlinear overcomplete representations for efficient coding We derive a learning algorithm for inferring an overcomplete basis by viewing it as a probabilistic model of the observed data. Overcomplete bases allow for better approximation of the underlying statistical density. Using a Laplacian prior on the basis coefficients removes redundancy and leads to representations that are sparse and are a nonlinear function of the data. This can be viewed as a generalization of the technique of independent component analysis and provides a method for blind ...
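With a Laplacian prior, MAP inference of the coefficients is an L1-regularized least-squares problem. A minimal ISTA sketch (our choice of solver, not necessarily the paper's inference procedure):

```python
import numpy as np

def infer_sparse_codes(A, x, lam=0.1, n_iter=200):
    """MAP coefficients s under a Laplacian prior:
    minimize 0.5 * ||x - A s||^2 + lam * ||s||_1 by ISTA (proximal gradient)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = s - A.T @ (A @ s - x) / L        # gradient step on the quadratic term
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.RandomState(0)
A = rng.randn(16, 64)                        # 4x overcomplete basis
s_true = (rng.rand(64) < 0.05).astype(float) # sparse ground-truth code
x = A @ s_true
s_hat = infer_sparse_codes(A, x)
print(np.count_nonzero(np.abs(s_hat) > 1e-3))  # a sparse code is recovered
```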
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem.
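The planar unit-speed Frenet-Serret model reduces to x' = cos(theta), y' = sin(theta), theta' = u, where u is the curvature (steering) control. A forward-Euler sketch (toy integrator, our naming) confirms that constant u traces a circle:

```python
import numpy as np

def simulate(u, T, dt=1e-3):
    """Forward-Euler integration of x' = cos(theta), y' = sin(theta), theta' = u."""
    x = y = theta = 0.0
    for _ in range(int(T / dt)):
        x += np.cos(theta) * dt
        y += np.sin(theta) * dt
        theta += u * dt
    return x, y

# Curvature 1 for time 2*pi: the vehicle traverses one unit circle.
x, y = simulate(u=1.0, T=2 * np.pi)
print(x, y)  # both close to 0: the vehicle returns near its starting point
```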
SafetyNet: improving the availability of shared memory multiprocessors with global checkpoint/recovery We develop an availability solution, called SafetyNet, that uses a unified, lightweight checkpoint/recovery mechanism to support multiple long-latency fault detection schemes. At an abstract level, SafetyNet logically maintains multiple, globally consistent checkpoints of the state of a shared memory multiprocessor (i.e., processors, memory, and coherence permissions), and it recovers to a pre-fault checkpoint of the system and re-executes if a fault is detected. SafetyNet efficiently coordinates checkpoints across the system in logical time and uses "logically atomic" coherence transactions to free checkpoints of transient coherence state. SafetyNet minimizes performance overhead by pipelining checkpoint validation with subsequent parallel execution.We illustrate SafetyNet avoiding system crashes due to either dropped coherence messages or the loss of an interconnection network switch (and its buffered messages). Using full-system simulation of a 16-way multiprocessor running commercial workloads, we find that SafetyNet (a) adds statistically insignificant runtime overhead in the common-case of fault-free execution, and (b) avoids a crash when tolerated faults occur.
File system aging—increasing the relevance of file system benchmarks Benchmarks are important because they provide a means for users and researchers to characterize how their workloads will perform on different systems and different system architectures. The field of file system design is no different from other areas of research in this regard, and a variety of file system benchmarks are in use, representing a wide range of the different user workloads that may be run on a file system. A realistic benchmark, however, is only one of the tools that is required in order to understand how a file system design will perform in the real world. The benchmark must also be executed on a realistic file system. While the simplest approach may be to measure the performance of an empty file system, this represents a state that is seldom encountered by real users. In order to study file systems in more representative conditions, we present a methodology for aging a test file system by replaying a workload similar to that experienced by a real file system over a period of many months, or even years. Our aging tools allow the same aging workload to be applied to multiple versions of the same file system, allowing scientific evaluation of the relative merits of competing file system designs.In addition to describing our aging tools, we demonstrate their use by applying them to evaluate two enhancements to the file layout policies of the UNIX fast file system.
Relating equivalence and reducibility to sparse sets For various polynomial-time reducibilities r, the authors ask whether being r-reducible to a sparse set is a broader notion than being r-equivalent to a sparse set. Although distinguishing equivalence and reducibility to sparse sets, for many-one or 1-truth-table reductions, would imply that P≠NP, the authors show that for k-truth-table reductions, k⩾2, equivalence and reducibility to sparse sets provably differ. Though R. Gavalda and D. Watanabe have shown that, for any polynomial-time computable unbounded function f(·), some sets f(n)-truth-table reducible to sparse sets are not even Turing equivalent to sparse sets, the authors show that extending their result to the 2-truth-table case would provide a proof that P≠NP. Additionally, the authors study the relative power of different notions of reducibility and show that disjunctive and conjunctive truth-table reductions to sparse sets are surprisingly powerful, refuting a conjecture of K. Ko (1989).
Anatomical Structure Sketcher For Cephalograms By Bimodal Deep Learning The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnosis and treatment planning in clinical dentistry. The robust anatomical structure detection and accurate annotation remain challenging considering the personal skeletal variations and image blurs caused by device-specific projection magnification, together with structure overlapping in the lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path in a manner similar to the data completion. The proposed method is robust not only to structure detection, but also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method.
1.1139
0.1078
0.1078
0.1078
0.0539
0.004778
0.00029
0.000031
0.000003
0
0
0
0
0
The Radon Cumulative Distribution Transform and Its Application to Image Classification Invertible image representation methods (transforms) are routinely employed as low-level image processing operations based on which feature extraction and recognition algorithms are developed. Most transforms in current use (e.g., Fourier, wavelet, and so on) are linear transforms and, by themselves, are unable to substantially simplify the representation of image classes for classification. Here, we describe a nonlinear, invertible, low-level image processing transform based on combining the well-known Radon transform for image data, and the 1D cumulative distribution transform proposed earlier. We describe a few of the properties of this new transform, and with both theoretical and experimental results show that it can often render certain problems linearly separable in a transform space.
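The 1D cumulative distribution transform that the method composes with the Radon transform can be sketched by CDF matching. This discretized version (our normalization and sign conventions) shows why a translation becomes a constant displacement, hence linearly separable:

```python
import numpy as np

def cdt_1d(f, f0, x):
    """Discrete 1D cumulative distribution transform of density f with respect
    to reference f0 on grid x: find the monotone map T with F(T(x)) = F0(x)
    and return the displacement T - x."""
    F = np.cumsum(f); F /= F[-1]
    F0 = np.cumsum(f0); F0 /= F0[-1]
    T = np.interp(F0, F, x)          # T = F^{-1} composed with F0
    return T - x

x = np.linspace(-4, 4, 513)
f0 = np.exp(-x**2 / 2)               # reference density (unnormalized Gaussian)
f = np.exp(-(x - 1.0)**2 / 2)        # the same density translated by 1
disp = cdt_1d(f, f0, x)
print(round(float(np.median(disp)), 2))  # ~1.0: translation -> constant displacement
```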
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Neural decoding with hierarchical generative models Recent research has shown that reconstruction of perceived images based on hemodynamic response as measured with functional magnetic resonance imaging (fMRI) is starting to become feasible. In this letter, we explore reconstruction based on a learned hierarchy of features by employing a hierarchical generative model that consists of conditional restricted Boltzmann machines. In an unsupervised phase, we learn a hierarchy of features from data, and in a supervised phase, we learn how brain activity predicts the states of those features. Reconstruction is achieved by sampling from the model, conditioned on brain activity. We show that by using the hierarchical generative model, we can obtain good-quality reconstructions of visual images of handwritten digits presented during an fMRI scanning session.
FingerNet: Deep learning-based robust finger joint detection from radiographs Radiographic image assessment is the most common method used to measure physical maturity and diagnose growth disorders, hereditary diseases and rheumatoid arthritis, with hand radiography being one of the most frequently used techniques due to its simplicity and minimal exposure to radiation. Finger joints are considered as especially important factors in hand skeleton examination. Although several automation methods for finger joint detection have been proposed, low accuracy and reliability are hindering full-scale adoption into clinical fields. In this paper, we propose FingerNet, a novel approach for the detection of all finger joints from hand radiograph images based on convolutional neural networks, which requires little user intervention. The system achieved 98.02 % average detection accuracy for 130 test data sets containing over 1,950 joints. Further analysis was performed to verify the system robustness against factors such as epiphysis and metaphysis in different age groups.
De novo identification of replication-timing domains in the human genome by deep learning. Motivation: The de novo identification of the initiation and termination zones (regions that replicate earlier or later than their upstream and downstream neighbours, respectively) remains a key challenge in DNA replication. Results: Building on advances in deep learning, we developed a novel hybrid architecture combining a pre-trained, deep neural network and a hidden Markov model (DNN-HMM) for the de novo identification of replication domains using replication timing profiles. Our results demonstrate that DNN-HMM can significantly outperform strong, discriminatively trained Gaussian mixture model-HMM (GMM-HMM) systems and six other reported methods that can be applied to this challenge. We applied our trained DNN-HMM to identify distinct replication domain types, namely the early replication domain (ERD), the down transition zone (DTZ), the late replication domain (LRD) and the up transition zone (UTZ), using newly replicated DNA sequencing (Repli-Seq) data across 15 human cells. A subsequent integrative analysis revealed that these replication domains harbour unique genomic and epigenetic patterns, transcriptional activity and higher-order chromosomal structure. Our findings support the 'replication-domain' model, which states (1) that ERDs and LRDs, connected by UTZs and DTZs, are spatially compartmentalized structural and functional units of higher-order chromosomal structure, (2) that the adjacent DTZ-UTZ pairs form chromatin loops and (3) that intra-interactions within ERDs and LRDs tend to be short-range and long-range, respectively. Our model reveals an important chromatin organizational principle of the human genome and represents a critical step towards understanding the mechanisms regulating replication timing.
A Novel Semi-Supervised Deep Learning Framework for Affective State Recognition on EEG Signals Nowadays the rapid development in the area of human-computer interaction has given birth to a growing interest on detecting different affective states through smart devices. By using the modern sensor equipment, we can easily collect electroencephalogram (EEG) signals, which capture the information from central nervous system and are closely related with our brain activities. Through the training on EEG signals, we can make reasonable analysis on people's affection, which is very promising in various areas. Unfortunately, the special properties of EEG dataset have brought difficulties for conventional machine learning methods. The main reasons lie in two aspects: the small set of labeled samples and the noisy channel problem. To overcome these difficulties and successfully identify the affective states, we come up with a novel semi-supervised deep structured framework. Compared with previous deep learning models, our method is more adapted to the EEG classification problem. We first adopt a two-level procedure, which involves both supervised label information and unsupervised structure information to jointly make decision on channel selection. And then, we add a generative Restricted Boltzmann Machine (RBM) model for the classification task, and use the training objectives of generative learning and unsupervised learning to jointly regularize the discriminative training. Finally, we extend it to the active learning scenario, which solves the costly labeling problem. The experiments conducted on real EEG dataset have shown both the convincing result on critical channel selection and the superiority of our method over multiple baselines for the affective state recognition.
An up-to-date comparison of state-of-the-art classification algorithms. Up-to-date report on the accuracy and efficiency of state-of-the-art classifiers. We compare the accuracy of 11 classification algorithms pairwise and groupwise. We examine separately the training, parameter-tuning, and testing time. GBDT and Random Forests yield the highest accuracy, outperforming SVM. GBDT is the fastest in testing; Naive Bayes is the fastest in training. Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top-5, alongside GBDT, RF, SVM, and C4.5 but this performance varies widely across all data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but second fastest in prediction efficiency. SRC shows good accuracy performance but it is the slowest classifier in both training and testing.
Predictive State Recurrent Neural Networks. We present a new model, Predictive State Recurrent Neural Networks (PSRNNs), for filtering and prediction in dynamical systems. PSRNNs draw on insights from both Recurrent Neural Networks (RNNs) and Predictive State Representations (PSRs), and inherit advantages from both types of models. Like many successful RNN architectures, PSRNNs use (potentially deeply composed) bilinear transfer functions to combine information from multiple sources. We show that such bilinear functions arise naturally from state updates in Bayes filters like PSRs, in which observations can be viewed as gating belief states. We also show that PSRNNs can be learned effectively by combining Backpropagation Through Time (BPTT) with an initialization derived from a statistically consistent learning algorithm for PSRs called two-stage regression (2SR). Finally, we show that PSRNNs can be factorized using tensor decomposition, reducing model size and suggesting interesting connections to existing multiplicative architectures such as LSTMs and GRUs. We apply PSRNNs to 4 datasets, and show that we outperform several popular alternative approaches to modeling dynamical systems in all cases.
Links between perceptrons, MLPs and SVMs We propose to study links between three important classification algorithms: Perceptrons, Multi-Layer Perceptrons (MLPs) and Support Vector Machines (SVMs). We first study ways to control the capacity of Perceptrons (mainly regularization parameters and early stopping), using the margin idea introduced with SVMs. After showing that under simple conditions a Perceptron is equivalent to an SVM, we show that it can be computationally expensive to train an SVM (and thus a Perceptron) with stochastic gradient descent, mainly because of the margin maximization term in the cost function. We then show that if we remove this margin maximization term, the learning rate or the use of early stopping can still control the margin. These ideas are extended afterward to the case of MLPs. Moreover, under some assumptions it also appears that MLPs are a kind of mixture of SVMs, maximizing the margin in the hidden layer space. Finally, we present a very simple MLP based on the previous findings, which yields better generalization performance and speed than the other models.
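The link the paper studies is easy to exhibit in code: stochastic gradient descent on the regularized hinge loss performs Perceptron-style updates while controlling a margin, i.e., it trains a linear SVM online. A sketch on synthetic data (names and data are ours):

```python
import numpy as np

def hinge_sgd(X, y, lam=0.01, lr=0.1, epochs=50, seed=0):
    """SGD on the regularized hinge loss: Perceptron-like updates that also
    maximize a margin, i.e., an online linear SVM."""
    w = np.zeros(X.shape[1])
    rng = np.random.RandomState(seed)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (w @ X[i])
            # Update only when the margin constraint is violated.
            w -= lr * (lam * w - (y[i] * X[i] if margin < 1 else 0.0))
    return w

rng = np.random.RandomState(1)
y = rng.choice([-1, 1], size=200)
X = y[:, None] * np.array([2.0, 0.5]) + rng.randn(200, 2)  # two noisy classes
w = hinge_sgd(X, y)
print(round(float(np.mean(np.sign(X @ w) == y)), 2))  # high training accuracy
```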
Greedy Layer-Wise Training of Deep Networks Deep multi-layer neural networks have many levels of non-linearities, which allows them to potentially represent very compactly highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task.
Energy-based models for sparse overcomplete representations We present a new way of extending independent components analysis (ICA) to overcomplete representations. In contrast to the causal generative extensions of ICA which maintain marginal independence of sources, we define features as deterministic (linear) functions of the inputs. This assumption results in marginal dependencies among the features, but conditional independence of the features given the inputs. By assigning energies to the features a probability distribution over the input states is defined through the Boltzmann distribution. Free parameters of this model are trained using the contrastive divergence objective (Hinton, 2002). When the number of features is equal to the number of input dimensions this energy-based model reduces to noiseless ICA and we show experimentally that the proposed learning algorithm is able to perform blind source separation on speech data. In additional experiments we train overcomplete energy-based models to extract features from various standard data-sets containing speech, natural images, hand-written digits and faces.
Learning Invariant Color Features With Sparse Topographic Restricted Boltzmann Machines Our objective is to learn invariant color features directly from data via unsupervised learning. In this paper, we introduce a method to regularize restricted Boltzmann machines during training to obtain features that are sparse and topographically organized. Upon analysis, the features learned are Gabor-like and demonstrate a coding of orientation, spatial position, frequency and color that vary smoothly with the topography of the feature map. There is also differentiation between monochrome and color filters, with some exhibiting color-opponent properties. We also found that the learned representation is more invariant to affine image transformations and changes in illumination color.
The complexity of acyclic conjunctive queries This paper deals with the evaluation of acyclic Boolean conjunctive queries in relational databases. By well-known results of Yannakakis [1981], this problem is solvable in polynomial time; its precise complexity, however, has not been pinpointed so far. We show that the problem of evaluating acyclic Boolean conjunctive queries is complete for LOGCFL, the class of decision problems that are logspace-reducible to a context-free language. Since LOGCFL is contained in AC1 and NC2, the evaluation problem of acyclic Boolean conjunctive queries is highly parallelizable. We present a parallel database algorithm solving this problem with a logarithmic number of parallel join operations. The algorithm is generalized to computing the output of relevant classes of non-Boolean queries. We also show that the acyclic versions of the following well-known database and AI problems are all LOGCFL-complete: the Query Output Tuple problem for conjunctive queries, Conjunctive Query Containment, Clause Subsumption, and Constraint Satisfaction. The LOGCFL-completeness result is extended to the class of queries of bounded tree width and to other relevant query classes which are more general than the acyclic queries.
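The polynomial-time upper bound traces back to the Yannakakis semijoin pass over a join tree: filter each parent relation by its children bottom-up, after which non-emptiness answers the Boolean query. A toy sketch (dict-based relations and our encoding; real systems derive the join tree from the query's hypergraph):

```python
def semijoin(parent, child):
    """Keep parent tuples that agree with some child tuple on shared attributes.
    Relations are lists of dicts mapping attribute name to value."""
    if not parent or not child:
        return []
    shared = sorted(set(next(iter(parent))) & set(next(iter(child))))
    keys = {tuple(t[a] for a in shared) for t in child}
    return [t for t in parent if tuple(t[a] for a in shared) in keys]

# Boolean query: does R(a,b) JOIN S(b,c) have any answer?  Join tree: R is the root.
R = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
S = [{"b": 2, "c": 5}]
R_reduced = semijoin(R, S)   # bottom-up semijoin pass
print(bool(R_reduced))       # True: the Boolean query is satisfiable
```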
I/O reference behavior of production database workloads and the TPC benchmarks—an analysis at the logical level As improvements in processor performance continue to far outpace improvements in storage performance, I/O is increasingly the bottleneck in computer systems, especially in large database systems that manage huge amounts of data. The key to achieving good I/O performance is to thoroughly understand its characteristics. In this article we present a comprehensive analysis of the logical I/O reference behavior of the peak production database workloads from ten of the world's largest corporations. In particular, we focus on how these workloads respond to different techniques for caching, prefetching, and write buffering. Our findings include several broadly applicable rules of thumb that describe how effective the various I/O optimization techniques are for the production workloads. For instance, our results indicate that the buffer pool miss ratio tends to be related to the ratio of buffer pool size to data size by an inverse square root rule. A similar fourth root rule relates the write miss ratio and the ratio of buffer pool size to data size. In addition, we characterize the reference characteristics of workloads similar to the Transaction Processing Performance Council (TPC) benchmarks C (TPC-C) and D (TPC-D), which are de facto standard performance measures for online transaction processing (OLTP) systems and decision support systems (DSS), respectively. Since benchmarks such as TPC-C and TPC-D can only be used effectively if their strengths and limitations are understood, a major focus of our analysis is to identify aspects of the benchmarks that stress the system differently than the production workloads. We discover that for the most part, the reference behavior of TPC-C and TPC-D falls within the range of behavior exhibited by the production workloads. However, there are some noteworthy exceptions that affect well-known I/O optimization techniques such as caching (LRU is further from the optimal for TPC-C, while there is little sharing of pages between transactions for TPC-D), prefetching (TPC-C exhibits no significant sequentiality), and write buffering (write buffering is less effective for the TPC benchmarks). While the two TPC benchmarks generally complement one another in reflecting the characteristics of the production workloads, there remain aspects of the real workloads that are not represented by either of the benchmarks.
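The two rules of thumb are compact enough to state as formulas: the read miss ratio scales roughly like (buffer_size / data_size)^(-1/2), with a fourth-root analogue for the write miss ratio. A toy illustration (the constant c is made up):

```python
def predicted_miss_ratio(buffer_size, data_size, exponent=0.5, c=0.1):
    """Rule-of-thumb miss ratio: c * (buffer/data)^(-exponent).
    exponent=0.5 for reads, 0.25 for writes; c is illustrative only."""
    return c * (buffer_size / data_size) ** (-exponent)

for frac in (0.01, 0.04, 0.16):
    print(frac, round(predicted_miss_ratio(frac, 1.0), 3))
# Under the square-root rule, quadrupling the buffer halves the predicted miss ratio.
```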
Automatic Derivation and Application of Induction Schemes for Mutually Recursive Functions This paper advocates and explores the use of multipredicate induction schemes for proofs about mutually recursive functions. The interactive application of multi-predicate schemes stemming from datatype definitions is already well-established practice; this paper describes an automated proof procedure based on multi-predicate schemes. Multipredicate schemes may be formally derived from (mutually recursive) function definitions; such schemes are often helpful in proving properties of mutually recursive functions where the recursion pattern does not follow that of the underlying datatypes. These ideas have been implemented using the HOL theorem prover and the Clam proof planner.
HIFCF: An effective hybrid model between picture fuzzy clustering and intuitionistic fuzzy recommender systems for medical diagnosis. • We focused on improving the quality of medical diagnosis. • A hybrid model between picture fuzzy clustering and recommender systems was shown. • It added the cluster information of patients into the new similarity degree. • It was experimentally validated on the benchmark dataset of UCI Machine Learning. • It has better accuracy than other relevant algorithms.
1.205759
0.10288
0.070491
0.026577
0.005714
0.001429
0.000479
0.00009
0.000006
0
0
0
0
0
OSF/1 Virtual Memory Improvements
Transforming policies into mechanisms with infokernel We describe an evolutionary path that allows operating systems to be used in a more flexible and appropriate manner by higher-level services. An infokernel exposes key pieces of information about its algorithms and internal state; thus, its default policies become mechanisms, which can be controlled from user-level. We have implemented two prototype infokernels based on the Linux 2.4 and NetBSD kernels, called infoLinux and infoBSD, respectively. The infokernels export key abstractions as well as basic information primitives. Using infoLinux, we have implemented four case studies showing that policies within Linux can be manipulated outside of the kernel. Specifically, we show that the default file cache replacement algorithm, file layout policy, disk scheduling algorithm, and TCP congestion control algorithm can each be turned into base mechanisms. For each case study, we have found that infokernel abstractions can be implemented with little code and that the overhead and accuracy of synthesizing policies at user-level is acceptable.
On-line file caching Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache, pay the retrieval cost, and choose files to remove from the cache so that the total size of files in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm. On any request sequence, the total cost incurred by the algorithm is at most k/(k-h+1) times the minimum possible using a cache of size h ≤ k. For any algorithm satisfying the latter bound, we show it is also the case that for most choices of k, the retrieval cost is either insignificant or the competitive ratio is constant. This helps explain why competitive ratios of many on-line paging algorithms have been typically observed to be constant in practice.
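The deterministic strategy described in this abstract is the Landlord algorithm (the LANDLORD that the storage-aware caching entry below compares against). A minimal Python sketch of one common variant, assuming hypothetical file names, sizes, and costs, and that each requested file fits in the cache:

```python
# A minimal sketch of a Landlord-style file cache. On a miss, all residents
# are "taxed" credit in proportion to their size until enough zero-credit
# files can be evicted. Assumes every requested file fits in the cache.

class LandlordCache:
    def __init__(self, capacity: float):
        self.capacity = capacity
        self.credit = {}   # file -> remaining credit
        self.size = {}     # file -> size
        self.used = 0.0

    def request(self, name: str, size: float, cost: float) -> float:
        """Serve one request; return the retrieval cost paid (0 on a hit)."""
        if name in self.credit:
            self.credit[name] = cost  # refresh credit on a hit (one common variant)
            return 0.0
        while self.used + size > self.capacity:
            delta = min(self.credit[f] / self.size[f] for f in self.credit)
            for f in list(self.credit):
                self.credit[f] -= delta * self.size[f]
                if self.credit[f] <= 1e-12:          # broke: evict
                    self.used -= self.size.pop(f)
                    del self.credit[f]
        self.credit[name] = cost
        self.size[name] = size
        self.used += size
        return cost

cache = LandlordCache(capacity=10)
total = sum(cache.request(n, s, c) for n, s, c in
            [("a", 4, 2), ("b", 4, 1), ("c", 4, 3), ("a", 4, 2)])
print("total retrieval cost:", total)  # 6: the final request for "a" is a hit
```

On a hit this variant resets the file's credit to its full retrieval cost; the analysis permits resetting it to any value between the current credit and the cost.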
Storage-Aware Caching: Revisiting Caching for Heterogeneous Storage Systems Modern storage environments are composed of a variety of devices with different performance characteristics. In this paper we explore storage-aware caching algorithms, in which the file buffer replacement algorithm explicitly accounts for differences in performance across devices. We introduce a new family of storage-aware caching algorithms that partition the cache, with one partition per device. The algorithms set the partition sizes dynamically to balance work across the devices. Through simulation, we show that our storage-aware policies perform similarly to LANDLORD, a cost-aware algorithm previously shown to perform well in Web caching environments. We also demonstrate that partitions can be easily incorporated into the Clock replacement algorithm, thus increasing the likelihood of deploying cost-aware algorithms in modern operating systems.
PB-LRU: a self-tuning power aware storage cache replacement algorithm for conserving disk energy Energy consumption is an important concern at data centers, where storage systems consume a significant fraction of the total energy. A recent study proposed power-aware storage cache management to provide more opportunities for the underlying disk power management scheme to save energy. However, the on-line algorithm proposed in that study requires cumbersome parameter tuning for each workload and is therefore difficult to apply to real systems.This paper presents a new power-aware on-line algorithm called PB-LRU (Partition-Based LRU) that requires little parameter tuning. Our results with both real system and synthetic workloads show that PB-LRU without any parameter tuning provides similar or even better performance and energy savings than the previous power-aware algorithm with the best parameter setting for each workload.
Zoned-RAID for multimedia database servers This paper proposes a novel fault-tolerant disk subsystem named Zoned-RAID (Z-RAID). Z-RAID improves the performance of the traditional RAID system by utilizing the zoning property of modern disks, which provides multiple zones with different data transfer rates in a disk. This study proposes to optimize the data transfer rate of the RAID system by constraining the placement of data blocks in multi-zone disks. We apply Z-RAID to multimedia database servers such as video servers that require a high data transfer rate as well as fault tolerance. Our analytical and experimental results demonstrate the superiority of Z-RAID to conventional RAID. Z-RAID provides a higher effective data transfer rate in normal mode with no disadvantage. In the presence of a disk failure, Z-RAID still performs as well as RAID.
Track-Aligned Extents: Matching Access Patterns to Disk Drive Characteristics Track-aligned extents (traxtents) utilize disk-specific knowledge to match access patterns to the strengths of modern disks. By allocating and accessing related data on disk track boundaries, a system can avoid most rotational latency and track crossing overheads. Avoiding these overheads can increase disk access efficiency by up to 50% for mid-sized requests (100-500KB). This paper describes traxtents, algorithms for detecting track boundaries, and some uses of traxtents in file systems and video servers. For large-file workloads, a version of FreeBSD's FFS implementation that exploits traxtents reduces application run times by up to 20% compared to the original version. A video server using traxtent-based requests can support 56% more concurrent streams at the same startup latency and buffer space. For LFS, 44% lower overall write cost for track-sized segments can be achieved.
Fault tolerant design of multimedia servers Recent technological advances have made multimedia on-demand servers feasible. Two challenging tasks in such systems are: a) satisfying the real-time requirement for continuous delivery of objects at specified bandwidths and b) efficiently servicing multiple clients simultaneously. To accomplish these tasks and realize economies of scale associated with servicing a large user population, the multimedia server can require a large disk subsystem. Although a single disk is fairly reliable, a large disk farm can have an unacceptably high probability of disk failure. Further, due to the real-time constraint, the reliability and availability requirements of multimedia systems are very stringent. In this paper we investigate techniques for providing a high degree of reliability and availability, at low disk storage, bandwidth, and memory costs for on-demand multimedia servers.
Adaptive block rearrangement An adaptive technique for reducing disk seek times is described. The technique copies frequently referenced blocks from their original locations to reserved space near the middle of the disk. Reference frequencies need not be known in advance. Instead, they are estimated by monitoring the stream of arriving requests. Trace-driven simulations show that seek times can be cut substantially by copying only a small number of blocks using this technique. The technique has been implemented by modifying a UNIX device driver. No modifications are required to the file system that uses the driver.
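A minimal sketch of the adaptive idea described above: estimate reference frequencies by monitoring the arriving requests, then remap the hottest blocks into a reserved region near the disk middle. The disk geometry, reserved-region size, and trace below are hypothetical; the paper's driver-level details are not reproduced.

```python
# Adaptive block rearrangement sketch: count block references, copy the most
# frequently referenced blocks to reserved slots near the middle of the disk.

from collections import Counter

DISK_BLOCKS = 10_000
RESERVED = range(4_900, 5_100)     # hypothetical reserved slots near mid-disk

def build_remap(requests, reserved=RESERVED):
    """Map the most frequently referenced blocks into the reserved region."""
    hot = [b for b, _ in Counter(requests).most_common(len(reserved))]
    return dict(zip(hot, reserved))

def locate(block, remap):
    """Physical location after rearrangement (copied blocks live mid-disk)."""
    return remap.get(block, block)

trace = [42, 42, 42, 9_871, 42, 17, 9_871]
remap = build_remap(trace)
print([locate(b, remap) for b in trace])   # hot blocks now resolve to mid-disk
```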
Mining Sequential Patterns: Generalizations and Performance Improvements
The BOSS-System: Coupling Visual Programming with Model Based Interface Design Due to the limitations of WYSIWYG User Interface Builders and User Interface ManagementSystems model based user interface construction tools gain rising researchinterest. The paper describes the BOSS system, a model based tool which employsan encompassing specification model (HIT, Hierarchic Interaction graph Templates)for setting up all parts of the model of an interactive application (application interface,user interaction task space, presentation design rules) in a declarative,...
Representing actions in logic programs and default theories a situation calculus approach We address the problem of representing common sense knowledge about action domains in the formalisms of logic programming and default logic. We employ a methodology proposed by Gelfond and Lifschitz which involves first defining a high-level language for representing knowledge about actions, and then specifying a translation from the high-level action language into a general-purpose formalism, such as logic programming. Accordingly, we define a high-level action languageAE, and specify sound and complete translations of portions ofAEinto logic programming and default logic. The languageAEincludes propositions that represent “static causal laws” of the following kind: a fluent formula ψ can be made true by making a fluent formula true (or, more precisely, ψ is caused whenever is caused). Such propositions are more expressive than the state constraints traditionally used to represent background knowledge. Our translations ofAEdomain descriptions into logic programming and default logic are simple, in part because the noncontrapositive nature of causal laws is easily reflected in such rule-based formalisms.
Pruning Conformant Plans by Counting Models on Compiled d-DNNF Representations Optimal planners in the classical setting are built around two notions: branching and pruning. SAT-based planners for ex- ample branch by trying the values of a selected variable, and prune by propagating constraints and checking consistency. In the conformant setting, a similar branching scheme can be used if restricted to action variables, but the pruning scheme must be modified. Indeed, pruning branches that encode in- consistent partial plans is not sufficient since a partial plan may be consistent and complete (covering all the action vari- ables) and still fail to be a conformant plan. This happens indeed when the plan does not conform to some possible ini- tial state or transition. A remedy to this problem is to use a criterion stronger than consistency for pruning. This is actu- ally what we do in this paper where the consistency-based pruning criterion used in classical planning is replaced by a validity-based criterion suitable for conformant planning. Under the assumption that actions are deterministic, a partial plan can be defined as valid when it is logically consistent with the theory and each possible initial state. A valid partial plan that is complete is guaranteed to encode a conformant plan, and vice versa. Checking validity, however, while use- ful for pruning can be very expensive. We show then that such validity checks can be performed in linear time pro- vided that the theory encoding the problem is transformed into a logically equivalent theory in deterministic decompos- able negation normal form (d-DNNF). In d-DNNF, plan va- lidity checks can be reduced to two linear-time operations: projection (finding the strongest consequence of a formula over some of its variables) and model counting (finding the number of satisfying assignments). We then define and eval- uate a conformant planner that branches on action variables, and prunes invalid partial plans in linear time. The empiri- cal results are encouraging, showing the potential benefits of stronger forms of inference in planning tasks that are not re- ducible to SAT.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.078321
0.052404
0.037619
0.020171
0.011444
0.004448
0.001573
0.00019
0.000076
0.000023
0.000001
0
0
0
Levenshtein in Blocks World: String Matching via AI Planning. In this paper we provide an encoding that converts string matching problems into planning problems in Artificial Intelligence. As an example use of the encoding, the Levenshtein distance for measuring the similarity between two strings is calculated by searching for a shortest feasible plan from the initial state to the goal state. The research has its origin in Blocks World, a benchmark domain for studying the theory and application of AI planning. We believe that connecting with AI planning not only creates promising opportunities for the development of new, knowledge-rich heuristics, but also enables hands-on use of existing high-performance AI planners and reasoners for string matching.
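For reference alongside the planning encoding, the quantity being computed is the classic edit distance; a shortest plan in such an encoding would have exactly this length. The standard dynamic program:

```python
# Classic Levenshtein distance: the minimum number of insertions, deletions,
# and substitutions turning s into t, computed row by row.

def levenshtein(s: str, t: str) -> int:
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        curr = [i]
        for j, b in enumerate(t, 1):
            curr.append(min(prev[j] + 1,              # delete a
                            curr[j - 1] + 1,          # insert b
                            prev[j - 1] + (a != b)))  # substitute a -> b
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```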
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
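As a concrete instance of the commonly occurring (normal) defaults this treatment covers, the textbook "birds typically fly" rule, which is not an example drawn from the paper itself, can be written as:

```latex
% A normal default: if Bird(x) holds and Flies(x) is consistent with what is
% believed, conclude Flies(x). Learning that x is a penguin blocks the
% conclusion, which is what makes the logic nonmonotonic.
\[
  \frac{\mathit{Bird}(x) \;:\; \mathit{Flies}(x)}{\mathit{Flies}(x)}
\]
```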
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
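A brute-force illustration of the semantics via the Gelfond-Lifschitz reduct: a set of atoms is stable iff it equals the least model of its reduct. The two-rule program below is a standard toy example, not one from the paper.

```python
# Naive stable model enumeration for a ground normal program. Rules are
# (head, positive_body, negative_body). Example program: p :- not q.  q :- not p.
from itertools import chain, combinations

def least_model(definite_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    # Reduct: delete rules whose negative body intersects the candidate,
    # then drop the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in program if not set(neg) & candidate]
    return least_model(reduct) == candidate

atoms = {"p", "q"}
program = [("p", (), ("q",)), ("q", (), ("p",))]
subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(program, set(s))])  # [{'p'}, {'q'}]
```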
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
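A naive recursive evaluator of the kind the paper starts from, without its pruning improvements, can be stated in a few lines. The representation here is a hypothetical choice: a quantifier prefix over integer variables plus a CNF matrix with signed integer literals.

```python
# Naive QBF evaluation: recurse on the quantifier prefix, branching on both
# values of each variable; at the end, check the CNF matrix.

def evaluate(prefix, clauses, assignment=None):
    assignment = assignment or {}
    if not prefix:
        # Every clause must contain at least one satisfied literal.
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in clauses)
    (quant, var), rest = prefix[0], prefix[1:]
    results = (evaluate(rest, clauses, {**assignment, var: val})
               for val in (False, True))
    return any(results) if quant == "exists" else all(results)

# forall x exists y. (x <-> y), in CNF as (~x | y) & (x | ~y): evaluates to true.
prefix = [("forall", 1), ("exists", 2)]
clauses = [[-1, 2], [1, -2]]
print(evaluate(prefix, clauses))  # True
```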
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
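A minimal sketch of the method with a polynomial kernel: center the kernel matrix, solve the eigenvalue problem, and project. The data, kernel degree, and the small eigenvalue floor below are illustrative assumptions, not values from the paper.

```python
# Kernel PCA sketch: eigendecompose the centered kernel matrix and project
# the training data onto the leading (normalized) eigenvectors.
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                # polynomial kernel
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    # Normalize so each feature-space eigenvector has unit length.
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                           # projections of the training data

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 16))                    # e.g. 50 tiny "images" of 16 pixels
print(kernel_pca(X).shape)                       # (50, 2)
```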
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
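A toy illustration of the square root idea on a made-up one-dimensional trajectory: factorize the measurement Jacobian with QR and back-substitute, instead of forming and inverting the information matrix. The poses and measurement values are hypothetical, not from the paper.

```python
# Square-root smoothing sketch: solve the SLAM least-squares problem via
# QR factorization of the measurement Jacobian A (so R is the square root
# of the information matrix A^T A).
import numpy as np

# Three poses x0, x1, x2: a prior on x0, two odometry constraints of +1.0,
# and one loop-closure-style measurement saying x2 - x0 = 2.1.
A = np.array([[ 1.0,  0.0, 0.0],   # prior:    x0      = 0
              [-1.0,  1.0, 0.0],   # odometry: x1 - x0 = 1.0
              [ 0.0, -1.0, 1.0],   # odometry: x2 - x1 = 1.0
              [-1.0,  0.0, 1.0]])  # loop:     x2 - x0 = 2.1
b = np.array([0.0, 1.0, 1.0, 2.1])

Q, R = np.linalg.qr(A)             # A = QR, R upper triangular
x = np.linalg.solve(R, Q.T @ b)    # back-substitution yields the trajectory
print(x)                           # least-squares estimate of (x0, x1, x2)
```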
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data required for training better incident classifiers, which is expensive to obtain, since the feature learning is unsupervised.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0