Dataset schema (one line per column: name, dtype, observed min/max; string columns report text length, score columns report value):

Column        Dtype     Min    Max
Query Text    string    9      8.71k
Ranking 1     string    14     5.31k
Ranking 2     string    11     5.31k
Ranking 3     string    11     8.42k
Ranking 4     string    17     8.71k
Ranking 5     string    14     4.95k
Ranking 6     string    14     8.42k
Ranking 7     string    17     8.42k
Ranking 8     string    10     5.31k
Ranking 9     string    9      8.42k
Ranking 10    string    9      8.42k
Ranking 11    string    10     4.11k
Ranking 12    string    14     8.33k
Ranking 13    string    17     3.82k
score_0       float64   1      1.25
score_1       float64   0      0.25
score_2       float64   0      0.25
score_3       float64   0      0.24
score_4       float64   0      0.24
score_5       float64   0      0.24
score_6       float64   0      0.21
score_7       float64   0      0.1
score_8       float64   0      0.02
score_9       float64   0      0
score_10      float64   0      0
score_11      float64   0      0
score_12      float64   0      0
score_13      float64   0      0
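Each row in the preview below pairs one "Query Text" abstract with thirteen candidate abstracts ("Ranking 1" through "Ranking 13") and fourteen float scores. The sketch below shows one way such a row could be loaded and inspected with the Hugging Face datasets library; the dataset identifier is a placeholder, and pairing score_1..score_13 with Ranking 1..13 (leaving score_0 aside, since it alone can exceed 1) is an assumption about the layout, not documented metadata.

```python
# Minimal sketch, not an official loader. Assumptions:
# - "user/citation-ranking-demo" is a placeholder dataset id;
# - score_1..score_13 align with Ranking 1..Ranking 13 (score_0's
#   meaning is undocumented here and is left unpaired).
from datasets import load_dataset

ds = load_dataset("user/citation-ranking-demo", split="train")  # placeholder id
row = ds[0]

query = row["Query Text"]
candidates = [row[f"Ranking {i}"] for i in range(1, 14)]
scores = [row[f"score_{i}"] for i in range(14)]

print("query:", query[:80], "...")
for rank, (text, score) in enumerate(zip(candidates, scores[1:]), start=1):
    print(f"{rank:2d}  {score:.6f}  {text[:60]}")
```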
A comparison of high-availability media recovery techniques We compare two high-availability techniques for recovery from media failures in database systems. Both techniques achieve high availability by having two copies of all data and indexes, so that recovery is immediate. “Mirrored declustering” spreads two copies of each relation across two identical sets of disks. “Interleaved declustering” spreads two copies of each relation across one set of disks while keeping both copies of each tuple on separate disks. Both techniques pay the same costs of doubling storage requirements and requiring updates to be applied to both copies. Mirroring offers greater simplicity and universality. Recovery can be implemented at lower levels of the system software (e.g., the disk controller). For architectures that do not share disks globally, it allows global and local cluster indexes to be independent. Also, mirroring does not require data to be declustered (i.e., spread over multiple disks). Interleaved declustering offers significant improvements in recovery time, mean time to loss of both copies of some data, throughput during normal operation, and response time during recovery. For all architectures, interleaved declustering enables data to be spread over twice as many disks for improved load balancing. We show how tuning for interleaved declustering is simplified because it is dependent only on a few parameters that are usually well known for a specific workload and system configuration.
Random duplicate storage strategies for load balancing in multimedia servers An important issue in multimedia servers is disk load balancing. In this paper we use randomization and data redundancy to enable good load balancing. We focus on duplicate storage strategies, i.e., each data block is stored twice. This means that a request for a block can be serviced by two disks. A consequence of such a storage strategy is that we have to decide for each block which disk to use for its retrieval. This results in a so-called retrieval selection problem. We describe a graph model for duplicate storage strategies and derive polynomial time optimization algorithms for the retrieval selection problems of several storage strategies. Our model unifies and generalizes chained declustering and random duplicate assignment strategies. Simulation results and a probabilistic analysis complete this paper.
Random duplicated assignment: an alternative to striping in video servers An approach is presented for storing video data in large disk arrays. Video data is stored by assigning a number of copies of each data block to different, randomly chosen disks, where the number of copies may depend on the popularity of the corresponding video data. The approach offers an interesting alternative to the well-known striping techniques. Its use results in smaller response times and lower disk and RAM costs if many continuous variable-rate data streams have to be sustained simultaneously. It also offers some practical advantages relating to reliability and extendability. Based on this storage approach, three retrieval algorithms are presented that determine, for a given batch of data blocks, from which disk each of the data blocks should be retrieved. The performance of these algorithms is evaluated from an average-case as well as a worst-case perspective.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Shifted declustering: a placement-ideal layout scheme for multi-way replication storage architecture Recent years have seen a growing interest in the deployment of sophisticated replication based storage architecture in data-intensive computing. Existing placement-ideal data layout solutions place an emphasis on declustered parity based storage. However, there exist major differences between parity and replication architectures, especially in data layouts. We retrofit the desirable properties of optimal parallelism in parity architectures for replication architectures, and propose a novel placement-ideal data layout, shifted declustering, for replication based storage. Shifted declustering layout obtains optimal parallelism in a wide range of configurations, and obtains optimal high performance and load balancing in both fault-free and degraded modes. Our theoretical proofs and comprehensive simulation results show that shifted declustering is superior in performance and load balancing to traditional replication layout schemes such as standard mirroring, chained declustering, group rotational declustering and existing parity layout schemes PRIME and RELPR in reference [4].
EVENODD: an optimal scheme for tolerating double disk failures in RAID architectures We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD is the first known scheme for tolerating double disk failures that is optimal with regard to both storage and performance. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The only previously known scheme that employs optimal redundant storage (i.e. two extra disks) is based on Reed-Solomon (RS) error-correcting codes, requires computation over finite fields and results in a more complex implementation. For example, we show that the number of exclusive-OR operations involved in implementing EVENODD in a disk array with 15 disks is about 50% of the number required when using the RS scheme.
Performance analysis of disk arrays under failure Disk arrays (RAID) have been proposed as a possible approach to solving the emerging I/O bottleneck problem. The performance of a RAID system when all disks are operational and the MTTF_sys (mean time to system failure) have been well studied. However, the performance of disk arrays in the presence of failed disks has not received much attention. The same techniques that provide the storage efficient redundancy of a RAID system can also result in a significant performance hit when a single disk fails. This is of importance since single disk failures are expected to be relatively frequent in a system with a large number of disks. In this paper we propose a new variation of the RAID organization that has significant advantages in reducing the magnitude of the performance degradation when there is a single failure and can also reduce the MTTF_sys. We also discuss several strategies that can be implemented to speed the rebuild of the failed disk and thus increase the MTTF_sys. The efficacy of these strategies is shown to require the improved properties of the new RAID organization. An analysis is carried out to quantify the tradeoffs.
Write-only disk caches With recent declines in the cost of semiconductor memory and the increasing need for high performance I/O disk systems, it makes sense to consider the design of large caches. In this paper, we consider the effect of caching writes. We show that cache sizes in the range of a few percent allow writes to be performed at negligible or no cost and independently of locality considerations.
Extended ephemeral logging: log storage management for applications with long lived transactions Extended ephemeral logging (XEL) is a new technique for managing a log of database activity subject to the general assumption that the lifetimes of an application’s transactions may be statistically distributed over a wide range. The log resides on nonvolatile disk storage and provides fault tolerance to system failures (in which the contents of volatile main memory storage may be lost). XEL segments a log into a chain of fixed-size FIFO queues and performs generational garbage collection on records in the log. Log records that are no longer necessary for recovery purposes are “thrown away” when they reach the head of a queue; only records that are still needed for recovery are forwarded from the head of one queue to the tail of the next. XEL does not require checkpoints, permits fast recovery after a crash and is well suited for applications that have a wide distribution of transaction lifetimes. Quantitative evaluation of XEL via simulation indicates that it can significantly reduce the disk space required for the log, at the expense of slightly higher bandwidth for log information and more main memory; the reduced size of the log permits much faster recovery after a crash as well as cost savings. XEL can significantly reduce both the disk space and the disk bandwidth required for log information in a system that has been augmented with a nonvolatile region of main memory.
Reliability Analysis of Disk Array Organizations by Considering Uncorrectable Bit Errors In this paper, we present an analytic model to study the reliability of some important disk array organizations that have been proposed by others in the literature. These organizations are based on the combination of two options for the data layout, regular RAID-5 and block designs, and three alternatives for sparing: hot sparing, distributed sparing and parity sparing. Uncorrectable bit errors have a significant effect on reliability but are ignored in traditional reliability analysis of disk arrays. We consider both disk failures and uncorrectable bit errors in the model. The reliability of disk arrays is measured in terms of MTTDL (Mean Time To Data Loss). A unified formula of MTTDL has been derived for these disk array organizations. The MTTDLs of these disk array organizations are also compared using the analytic model.
LRFU: A Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies Efficient and effective buffering of disk blocks in main memory is critical for better file system performance due to a wide speed gap between main memory and hard disks. In such a buffering system, one of the most important design decisions is the block replacement policy that determines which disk block to replace when the buffer is full. In this paper, we show that there exists a spectrum of block replacement policies that subsumes the two seemingly unrelated and independent Least Recently Used (LRU) and Least Frequently Used (LFU) policies. The spectrum is called the LRFU (Least Recently/Frequently Used) policy and is formed by how much more weight we give to the recent history than to the older history. We also show that there is a spectrum of implementations of the LRFU that again subsumes the LRU and LFU implementations. This spectrum is again dictated by how much weight is given to recent and older histories, and the time complexity of the implementations lies between O(1) (the time complexity of LRU) and O(log2 n) (the time complexity of LFU), where n is the number of blocks in the buffer. Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads we considered.
An action language based on causal explanation: preliminary report Action languages serve for describing changes that are caused by performing actions. We define a new action language C, based on the theory of causal explanation proposed recently by McCain and Turner, and illustrate its expressive power by applying it to a number of examples. The mathematical results presented in the paper relate C to the Baral-Gelfond theory of concurrent actions.
Investigation of the CasCor Family of Learning Algorithms. Six learning algorithms are investigated and compared empirically. All of them are based on variants of the candidate training idea of the Cascade Correlation method. The comparison was performed using 42 different datasets from the PROBEN1 benchmark collection. The results indicate: (1) for these problems it is slightly better not to cascade the hidden units; (2) error minimization candidate training is better than covariance maximization for regression problems but may be a little worse for classification problems; (3) for most learning tasks, considering validation set errors during the selection of the best candidate will not lead to improved networks, but for a few tasks it will.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
scores (score_0 to score_13): 1.014535, 0.024415, 0.016278, 0.007771, 0.007407, 0.003587, 0.001958, 0.000729, 0.000096, 0.000011, 0, 0, 0, 0
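The score vector above is graded and non-increasing, so a row like this can feed a graded ranking metric directly. As a hedged illustration, assuming score_i is the relevance of the candidate at position i, NDCG@k over this row could be computed as follows:

```python
# Hedged sketch: NDCG@k over the graded score vector shown above.
# Assumption: score_i is the graded relevance at rank position i.
import math

def dcg(rels):
    # Discounted cumulative gain with log2(position + 1) discounting.
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_at_k(rels, k=10):
    ideal = dcg(sorted(rels, reverse=True)[:k])
    return dcg(rels[:k]) / ideal if ideal > 0 else 0.0

row_scores = [1.014535, 0.024415, 0.016278, 0.007771, 0.007407,
              0.003587, 0.001958, 0.000729, 0.000096, 0.000011,
              0, 0, 0, 0]
print(ndcg_at_k(row_scores))  # 1.0: the vector is already sorted descending
```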
Hamming Filters: A Dynamic Signature File Organization for Parallel Stores
Dynamic Multi-Resource Load Balancing in Parallel Database Systems
NCR 3700 - The Next-Generation Industrial Database Computer
The SEQUOIA 2000 Project This paper describes the objectives of the SEQUOIA 2000 project and the software development that is being done to achieve these objectives. In addition, several lessons relevant to Geographic Information Systems (GIS) that have been learned from the project are explained.
Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, Chicago, Illinois, June 1-3, 1988
Extended ephemeral logging: log storage management for applications with long lived transactions Extended ephemeral logging (XEL) is a new technique for managing a log of database activity subject to the general assumption that the lifetimes of an application’s transactions may be statistically distributed over a wide range. The log resides on nonvolatile disk storage and provides fault tolerance to system failures (in which the contents of volatile main memory storage may be lost). XEL segments a log into a chain of fixed-size FIFO queues and performs generational garbage collection on records in the log. Log records that are no longer necessary for recovery purposes are “thrown away” when they reach the head of a queue; only records that are still needed for recovery are forwarded from the head of one queue to the tail of the next. XEL does not require checkpoints, permits fast recovery after a crash and is well suited for applications that have a wide distribution of transaction lifetimes. Quantitative evaluation of XEL via simulation indicates that it can significantly reduce the disk space required for the log, at the expense of slightly higher bandwidth for log information and more main memory; the reduced size of the log permits much faster recovery after a crash as well as cost savings. XEL can significantly reduce both the disk space and the disk bandwidth required for log information in a system that has been augmented with a nonvolatile region of main memory.
Multi-disk B-trees
Distributing a B+-tree in a loosely coupled environment We consider the problem of maintaining a data file which must be distributed among disks, each controlled by a processor and residing in a different site. The pairs of disks and processors are connected via a local broadcast network. A simple and practical method is presented.
On Variable Scope of Parity Protection in Disk Arrays In a common form of a RAID 5 architecture, data is organized on a disk array consisting of N + 1 disks into stripes of N data blocks and one parity block (with parity block locations staggered so as to balance the number of parity blocks on each disk). This allows data to be recovered in the event of a single disk failure. Here we consider an extension to this architecture in which parity information applies to arbitrary subsets of the data blocks in each stripe. Using several simplifying assumptions, we present simulation and analytic results that provide estimates of the improvement using this approach, in terms of total I/O operations, as compared to 1) conventional RAID 5 under a random single-block write workload, and 2) the use of a log-structured file system in which data is written out in stripes. Results on the reduction of disk recovery costs are also presented.
Measurements of a distributed file system We analyzed the user-level file access patterns and caching behavior of the Sprite distributed file system. The first part of our analysis repeated a study done in 1985 of the BSD UNIX file system. We found that file throughput has increased by a factor of 20 to an average of 8 Kbytes per second per active user over 10-minute intervals, and that the use of process migration for load sharing increased burst rates by another factor of six. Also, many more very large (multi-megabyte) files are in use today than in 1985. The second part of our analysis measured the behavior of Sprite's main-memory file caches. Client-level caches average about 7 Mbytes in size (about one-quarter to one-third of main memory) and filter out about 50% of the traffic between clients and servers. 35% of the remaining server traffic is caused by paging, even on workstations with large memories. We found that client cache consistency is needed to prevent stale data errors, but that it is not invoked often enough to degrade overall system performance.
Deconstructing Network Attached Storage systems Network Attached Storage (NAS) has been gaining general acceptance, because it can be managed easily and files shared among many clients, which run different operating systems. The advent of Gigabit Ethernet and high speed transport protocols further facilitates the wide adoption of NAS. A distinct feature of NAS is that NAS involves both network I/O and file I/O. This paper analyzes the layered architecture of a typical NAS and the data flow, which travels through the layers. Several benchmarks are employed to explore the overhead involved in the layered NAS architecture and to identify system bottlenecks. The test results indicate that a Gigabit network is the system bottleneck due to the performance disparity between the storage stack and the network stack. The tests also demonstrate that the performance of NAS has lagged far behind that of the local storage subsystem, and the CPU utilization is not as high as imagined. The analysis in this paper gives three implications for the NAS, which adopts a Gigabit network: (1) The most effective method to alleviate the network bottleneck is increasing the physical network bandwidth or improving the utilization of network. For example, a more efficient network file system could boost the NAS performance. (2) It is unnecessary to employ specific hardware to increase the performance of the storage subsystem or the efficiency of the network stack because the hardware cannot contribute to the overall performance improvement. On the contrary, the hardware methods could have side effect on the throughput due to the small file accesses in NAS. (3) Adding more disk drives to an NAS when the aggregate performance reaches the saturation point can only contribute to storage capacity, but not performance. This paper aims to guide NAS designers or administrators to better understand and achieve a cost-effective NAS.
A training algorithm for optimal margin classifiers A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. This article explores the new modalities of visibility engendered by new media, with a focus on the social networking site Facebook. Influenced by Foucault's writings on Panopticism - that is, the architectural structuring of visibility - this article argues for understanding the construction of visibility on Facebook through an architectural framework that pays particular attention to underlying software processes and algorithmic power. Through an analysis of EdgeRank, the algorithm structuring the flow of information and communication on Facebook's 'News Feed', I argue that the regime of visibility constructed imposes a perceived 'threat of invisibility' on the part of the participatory subject. As a result, I reverse Foucault's notion of surveillance as a form of permanent visibility, arguing that participatory subjectivity is not constituted through the imposed threat of an all-seeing vision machine, but by the constant possibility of disappearing and becoming obsolete.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
scores (score_0 to score_13): 1.046159, 0.056215, 0.042197, 0.033388, 0.033388, 0.016785, 0.009452, 0.001577, 0.000048, 0.00001, 0.000001, 0, 0, 0
Fault Tolerant Interleaved Switching Fabrics For Scalable High-Performance Routers Scalable high performance routers and switches are required to provide a larger number of ports, higher throughput, and good reliability. Most of today’s routers and switches are implemented using a single crossbar as the switched fabric. The single crossbar complexity increases as O(N^2) in terms of crosspoint number, which might become unacceptable for scalability as the port number (N) increases. A delta class self-routing multistage interconnection network (MIN) with the complexity of O(N × log2 N) has been widely used in ATM switches. However, the reduction of the crosspoint number results in considerable internal blocking. A number of scalable methods have been proposed to solve this problem. One of them uses more stages with a recirculation architecture to reroute the deflected packets, which greatly increases the latency. In this paper, we propose an interleaved multistage switching fabrics architecture and assess its throughput with an analytical model and simulations. We compare this novel scheme with some previous parallel architectures and show its benefits. From extensive simulations under different traffic patterns and fault models, our interleaved architecture achieves better performance than its counterpart of single panel fabric. Our interleaved scheme achieves speedups (over the single panel fabric) of 3.4 and 2.25 under uniform and hot-spot traffic patterns, respectively, at maximum load (p=1). Moreover, the interleaved fabrics show great tolerance against internal hardware failures.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
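Unlike the first row, this row's relevance is binary (a single 1 followed by zeros), for which reciprocal rank is the natural summary. A small sketch under the same positional-alignment assumption:

```python
# Hedged sketch: reciprocal rank for a binary score vector like the one above.
def reciprocal_rank(rels):
    # Return 1/rank of the first relevant item, or 0.0 if none is relevant.
    for i, r in enumerate(rels):
        if r > 0:
            return 1.0 / (i + 1)
    return 0.0

print(reciprocal_rank([1] + [0] * 13))  # 1.0: the relevant item is ranked first
```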
ILP: Just Do It Inductive logic programming (ILP) is built on a foundation laid by research in other areas of computational logic. But in spite of this strong foundation, at 10 years of age ILP now faces a number of new challenges brought on by exciting application opportunities. The purpose of this paper is to interest researchers from other areas of computational logic in contributing their special skill sets to help ILP meet these challenges. The paper presents five future research directions for ILP and points to initial approaches or results where they exist. It is hoped that the paper will motivate researchers from throughout computational logic to invest some time into "doing" ILP.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Deep learning of chroma representation for cover song identification in compression domain Methods for identifying a cover song typically involve comparing the similarity of chroma features between the query song and another song in the data set. However, considerable time is required for pairwise comparisons. In addition, to save disk space, most songs stored in the data set are in a compressed format. Therefore, to eliminate some decoding procedures, this study extracted music information directly from the modified discrete cosine transform coefficients of advanced audio coding and then mapped these coefficients to 12-dimensional chroma features. The chroma features were segmented to preserve the melodies. Each chroma feature segment was trained and learned by a sparse autoencoder, a deep learning architecture of artificial neural networks. The deep learning procedure was to transform chroma features into an intermediate representation for dimension reduction. Experimental results from a covers80 data set showed that the mean reciprocal rank increased to 0.5 and the matching time was reduced by over 94% compared with traditional approaches.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Protecting and recovering database systems continuously Data protection is widely deployed in database systems, but the current technologies (e.g. backup, snapshot, mirroring and replication) cannot restore database systems to any point in time. This means that data is less well protected than it ought to be. Continuous data protection (CDP) is a new way to protect and recover data, which changes the data protection focus from backup to recovery. We (1) present a taxonomy of the current CDP technologies and a strict definition of CDP, (2) describe a model of continuous data protection and recovery (CDP-R) that is implemented based on CDP technology, and (3) report a simple evaluation of CDP-R. We are confident that CDP-R continuously protects and recovers database systems in the face of data loss, corruption and disaster, and that the key techniques of CDP-R are helpful in building a continuous data protection system, which can improve the reliability and availability of database systems and guarantee business continuity.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
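As a companion to the "very large class of commonly occurring defaults" mentioned above (normal defaults of the form p : c / c), the sketch below applies such defaults over propositional literals by forward chaining with a consistency check. The string encoding and the greedy application order are our own simplifications, not Reiter's proof theory, and different application orders can yield different extensions.

```python
def apply_defaults(facts, defaults):
    """Greedy construction of an extension for a normal default theory
    (W, D): each default (prereqs, concl) reads 'prereqs : concl / concl'
    and fires only if its conclusion is consistent with what is already
    believed.  Literals are strings; '~' marks negation."""
    neg = lambda l: l[1:] if l.startswith("~") else "~" + l
    beliefs = set(facts)
    changed = True
    while changed:
        changed = False
        for prereqs, concl in defaults:
            if set(prereqs) <= beliefs and concl not in beliefs \
                    and neg(concl) not in beliefs:
                beliefs.add(concl)
                changed = True
    return beliefs

# birds normally fly, but this penguin is explicitly known not to
w = ["bird", "penguin", "~flies"]
d = [(["bird"], "flies"), (["penguin"], "~flies")]
print(apply_defaults(w, d))   # 'flies' is blocked: it contradicts ~flies
```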
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
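Stated operationally: a set of atoms is a stable model iff it equals the least model of the program's Gelfond-Lifschitz reduct with respect to that set. A brute-force Python sketch over ground normal programs follows; the triple encoding of rules is our own illustrative choice.

```python
from itertools import chain, combinations

def is_stable(model, rules):
    """Gelfond-Lifschitz test: 'model' is stable iff it equals the least
    model of the reduct.  Rules are (head, positive_body, negative_body)."""
    # reduct: drop rules whose negative body meets the model, then
    # delete the remaining negative literals
    reduct = [(h, pos) for h, pos, neg in rules if not set(neg) & model]
    lm, changed = set(), True
    while changed:                  # least model of the positive reduct
        changed = False
        for h, pos in reduct:
            if set(pos) <= lm and h not in lm:
                lm.add(h)
                changed = True
    return lm == model

# p :- not q.    q :- not p.     (two stable models: {p} and {q})
rules = [("p", [], ["q"]), ("q", [], ["p"])]
atoms = ["p", "q"]
subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
print([sorted(s) for s in subsets if is_stable(set(s), rules)])  # [['p'], ['q']]
```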
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
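For orientation, the "straightforward algorithm of this kind" that the abstract above takes as its baseline looks roughly as follows: split on the outermost prefix variable, requiring one true branch for an existential and both branches for a universal. None of the paper's improvements are included, and the DIMACS-style integer literal encoding is our own choice.

```python
def eval_qbf(prefix, clauses, assign=None):
    """Naive evaluation of a closed prenex-CNF QBF.  'prefix' is a list of
    ('e', v) / ('a', v) pairs; clauses are lists of non-zero ints, with a
    negative int meaning a negated variable.  Pure splitting, no pruning."""
    assign = assign or {}
    if not prefix:                       # all variables set: check the matrix
        return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
    q, v = prefix[0]
    branches = (eval_qbf(prefix[1:], clauses, {**assign, v: val})
                for val in (False, True))
    return any(branches) if q == 'e' else all(branches)

# forall x exists y (x <-> y) is true; exists y forall x (x <-> y) is false
print(eval_qbf([('a', 1), ('e', 2)], [[1, -2], [-1, 2]]))   # True
print(eval_qbf([('e', 2), ('a', 1)], [[1, -2], [-1, 2]]))   # False
```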
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map (for instance, the space of all possible five-pixel products in 16 x 16 images). We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
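Operationally, the method reduces to an eigendecomposition of the centered Gram matrix. A short numpy sketch follows; the RBF kernel, the gamma value, and all names are illustrative choices, not the paper's experimental setup (which uses polynomial kernels).

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA: diagonalize the centered Gram matrix K instead of the
    covariance matrix, so components live in the implicit feature space."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    n = len(X)
    J = np.full((n, n), 1.0 / n)
    Kc = K - J @ K - K @ J + J @ K @ J             # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # scale eigenvectors so the feature-space components have unit norm
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                             # projections of training points

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(kernel_pca(X, n_components=2).shape)         # (100, 2)
```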
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Synthesizing and Executing Plans in Knowledge and Action Bases.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map (for instance, the space of all possible five-pixel products in 16 x 16 images). We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Phase Transitions in Classical Planning: An Experimental Study. Phase transitions in the solubility of problem instances are known in many types of computational problems relevant for artificial intelligence, most notably for the satisfiability problem of the classical propositional logic. However, phase transitions in classical planning have received far less attention. Bylander has investigated phase transitions theoretically as well as experimentally by using simplified planning algorithms, and shown that most of the soluble problems can be solved by a naïve hill-climbing algorithm. Because of the simplicity of his algorithms he did not investigate hard problems on the phase transition region. In this paper, we address exactly this problem. We introduce two new models of problem instances, one eliminating the most trivially insoluble instances from Bylander’s model, and the other restricting the class of problem instances further. Then we perform experiments on the behavior of different types of planning algorithms on hard problems from the phase transition region, showing that a planner based on general-purpose satisfiability algorithms outperforms two planners based on heuristic local search.
Planning with Preferences Automated planning is a branch of AI that addresses the problem of generating a set of actions to achieve a specified goal state, given an initial state of the world. It is an active area of research that is central to the development of intelligent agents and autonomous robots. In many real-world applications, a multitude of valid plans exist, and a user distinguishes plans of high quality by how well they adhere to the user's preferences. To generate such high-quality plans automatically, a planning system must provide a means of specifying the user's preferences with respect to the planning task, as well as a means of generating plans that ideally optimize these preferences. In the last few years, there has been significant research in the area of planning with preferences. In this article we review current approaches to preference representation for planning, as well as overviewing and contrasting the various approaches to generating preferred plans that have been developed to date.
Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning In Verification and in (optimal) AI Planning, a successful method is to formulate the application as boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior, the proof arguments, illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.
Engineering benchmarks for planning: the domains used in the deterministic part of IPC-4 In a field of research about general reasoning mechanisms, it is essential to have appropriate benchmarks. Ideally, the benchmarks should reflect possible applications of the developed technology. In AI Planning, researchers more and more tend to draw their testing examples from the benchmark collections used in the International Planning Competition (IPC). In the organization of (the deterministic part of) the fourth IPC, IPC-4, the authors therefore invested significant effort to create a useful set of benchmarks. They come from five different (potential) real-world applications of planning: airport ground traffic control, oil derivative transportation in pipeline networks, model-checking safety properties, power supply restoration, and UMTS call setup. Adapting and preparing such an application for use as a benchmark in the IPC involves, at the same time, inevitable (often drastic) simplifications, as well as careful choice between, and engineering of, domain encodings. For the first time in the IPC, we used compilations to formulate complex domain features in simple languages such as STRIPS, rather than just dropping the more interesting problem constraints in the simpler language subsets. The article explains and discusses the five application domains and their adaptation to form the PDDL test suites used in IPC-4. We summarize known theoretical results on structural properties of the domains, regarding their computational complexity and provable properties of their topology under the h+ function (an idealized version of the relaxed plan heuristic). We present new (empirical) results illuminating properties such as the quality of the most wide-spread heuristic functions (planning graph, serial planning graph, and relaxed plan), the growth of propositional representations over instance size, and the number of actions available to achieve each fact; we discuss these data in conjunction with the best results achieved by the different kinds of planners participating in IPC-4.
Action Planning for Directed Model Checking of Petri Nets Petri nets are fundamental to the analysis of distributed systems, especially infinite-state systems. Finding a particular marking corresponding to a property violation in Petri nets can be reduced to exploring a state space induced by the set of reachable markings. Typical exploration (reachability analysis) approaches are undirected and do not take into account any knowledge about the structure of the Petri net. This paper proposes heuristic search for enhanced exploration to accelerate the search. For different needs in the system development process, we distinguish between different sorts of estimates. Treating the firing of a transition as an action applied to a set of predicates induced by the Petri net structure and markings, the reachability analysis can be reduced to finding a plan for an AI planning problem. Having such a reduction broadens the horizons for the application of AI heuristic search planning technology. In this paper we discuss the transformation schemes used to encode Petri nets into PDDL. We show a concise encoding of general place-transition nets in Level 2 PDDL2.2, and a specification for bounded place-transition nets in ADL/STRIPS. Initial experiments with an existing planner are presented.
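To make the flavor of such encodings concrete, here is a toy emitter for the simplest case only, 1-safe place-transition nets, where each place becomes a proposition and each transition a STRIPS action. The output formatting, the predicate name, and the restriction to safe nets are our own assumptions; the paper's Level 2 PDDL2.2 encoding of general nets with token counts is not reproduced.

```python
def net_to_pddl(transitions, marking, goal):
    """Encode a 1-safe place-transition net as STRIPS PDDL: firing a
    transition consumes the tokens on its input places and produces
    tokens on its output places; reaching 'goal' becomes a planning task."""
    places = set(marking) | set(goal)
    actions = []
    for name, (ins, outs) in transitions.items():
        places |= set(ins) | set(outs)
        pre = " ".join(f"(marked {p})" for p in ins)
        adds = " ".join(f"(marked {p})" for p in outs)
        dels = " ".join(f"(not (marked {p}))" for p in set(ins) - set(outs))
        actions.append(f"  (:action {name} :parameters ()\n"
                       f"    :precondition (and {pre})\n"
                       f"    :effect (and {adds} {dels}))")
    domain = ("(define (domain net)\n  (:predicates (marked ?p))\n"
              + "\n".join(actions) + ")")
    init = " ".join(f"(marked {p})" for p in marking)
    want = " ".join(f"(marked {p})" for p in goal)
    problem = ("(define (problem reach) (:domain net)\n"
               f"  (:objects {' '.join(sorted(places))})\n"
               f"  (:init {init})\n  (:goal (and {want})))")
    return domain, problem

# a token moves p1 -> p2 -> p3 via transitions t1 and t2
dom, prob = net_to_pddl({"t1": (["p1"], ["p2"]), "t2": (["p2"], ["p3"])},
                        marking=["p1"], goal=["p3"])
print(dom)
print(prob)
```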
Complexity results for standard benchmark domains in planning The efficiency of AI planning systems is usually evaluated empirically. For the validity of conclusions drawn from such empirical data, the problem set used for evaluation is of critical importance. In planning, this problem set usually, or at least often, consists of tasks from the various planning domains used in the first two international planning competitions, hosted at the 1998 and 2000 AIPS conferences. It is thus surprising that comparatively little is known about the properties of these benchmark domains, with the exception of BLOCKSWORLD, which has been studied extensively by several research groups.In this contribution, we try to remedy this fact by providing a map of the computational complexity of non-optimal and optimal planning for the set of domains used in the competitions. We identify a common transportation theme shared by the majority of the benchmarks and use this observation to define and analyze a general transportation problem that generalizes planning in several classical domains such as LOGISTICS, MYSTERY and GRIPPER. We then apply the results of that analysis to the actual transportation domains from the competitions. We next examine the remaining benchmarks, which do not exhibit a strong transportation feature, namely SCHEDULE and FREECELL.Relating the results of our analysis to empirical work on the behavior of the recently very successful FF planning system, we observe that our theoretical results coincide well with data obtained from empirical investigations.
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
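Since several surrounding entries (the QBF and planning-as-SAT papers) build on this procedure, a compact sketch of its modern reading, splitting plus unit propagation, may help; the set-of-integers clause encoding is our own choice.

```python
def dpll(clauses):
    """Satisfiability by splitting with unit propagation.  Clauses are
    collections of non-zero ints; a negative int is a negated variable."""
    clauses = [set(c) for c in clauses]
    if any(not c for c in clauses):
        return False                     # empty clause: immediate conflict
    while True:                          # unit propagation to a fixpoint
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        simplified = []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied, drop it
            c = c - {-lit}               # falsified literal, remove it
            if not c:
                return False
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return True
    v = abs(next(iter(clauses[0])))      # split on some remaining variable
    return dpll(clauses + [{v}]) or dpll(clauses + [{-v}])

print(dpll([[1, 2], [-1, 2]]))           # True  (satisfiable)
print(dpll([[1, 2], [-1, 2], [-2]]))     # False (unsatisfiable)
```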
QBF modeling: exploiting player symmetry for simplicity and efficiency Quantified Boolean Formulas (QBFs) present the next big challenge for automated propositional reasoning. Not surprisingly, most of the present day QBF solvers are extensions of successful propositional satisfiability algorithms (SAT solvers). They directly integrate the lessons learned from SAT research, thus avoiding re-inventing the wheel. In particular, they use the standard conjunctive normal form (CNF) augmented with layers of variable quantification for modeling tasks as QBF. We argue that while CNF is well suited to “existential reasoning” as demonstrated by the success of modern SAT solvers, it is far from ideal for “universal reasoning” needed by QBF. The CNF restriction imposes an inherent asymmetry in QBF and artificially creates issues that have led to complex solutions, which, in retrospect, were unnecessary and sub-optimal. We take a step back and propose a new approach to QBF modeling based on a game-theoretic view of problems and on a dual CNF-DNF (disjunctive normal form) representation that treats the existential and universal parts of a problem symmetrically. It has several advantages: (1) it is generic, compact, and simpler, (2) unlike fully non-clausal encodings, it preserves the benefits of pure CNF and leverages the support for DNF already present in many QBF solvers, (3) it doesn't use the so-called indicator variables for conversion into CNF, thus circumventing the associated illegal search space issue, and (4) our QBF solver based on the dual encoding (Duaffle) consistently outperforms the best solvers by two orders of magnitude on a hard class of benchmarks, even without using standard learning techniques.
Expressiveness and tractability in knowledge representation and reasoning
CP-nets: a tool for representing and reasoning with conditional ceteris paribus preference statements Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence.
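Of the inference tasks the abstract lists, constructing the best outcome has the simplest procedure in acyclic CP-nets: sweep the variables in a topological order, giving each its most preferred value given its parents' chosen values. A sketch, with our own illustrative dictionary encoding of the conditional preference tables:

```python
def best_outcome(order, net):
    """Outcome optimization in an acyclic CP-net: 'order' is a topological
    order of the variables; net[v] = (parents, table), where 'table' maps a
    tuple of the parents' chosen values to v's most preferred value."""
    outcome = {}
    for v in order:
        parents, table = net[v]
        outcome[v] = table[tuple(outcome[p] for p in parents)]
    return outcome

# dinner: fish is preferred outright; the wine preference depends on the main
net = {
    "main": ([], {(): "fish"}),
    "wine": (["main"], {("fish",): "white", ("meat",): "red"}),
}
print(best_outcome(["main", "wine"], net))  # {'main': 'fish', 'wine': 'white'}
```

Dominance testing, by contrast, requires search over sequences of worsening flips and is not shown here.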
SLX—a top-down derivation procedure for programs with explicit negation In this paper we define a sound and (theoretically) complete top-down derivation procedure for a well-founded semantics of logic programs extended with explicit negation (WFSX). By its nature, it is amenable to a simple interpreter implementation in Prolog, and readily allows pre-processing into Prolog, showing promise as an efficient basis for further development.
A Deep and Tractable Density Estimator. The Neural Autoregressive Distribution Estimator (NADE) and its real-valued version RNADE are competitive density models of multidimensional data across a variety of domains. These models use a fixed, arbitrary ordering of the data dimensions. One can easily condition on variables at the beginning of the ordering, and marginalize out variables at the end of the ordering, however other inference tasks require approximate inference. In this work we introduce an efficient procedure to simultaneously train a NADE model for each possible ordering of the variables, by sharing parameters across all these models. We can thus use the most convenient model for each inference task at hand, and ensembles of such models with different orderings are immediately available. Moreover, unlike the original NADE, our training procedure scales to deep models. Empirically, ensembles of Deep NADE models obtain state of the art density estimation performance.
Towards Active Logic Programming. In this paper we present the new logic programming language DALI, aimed at defining agents and agent systems. A main design objective for DALI has been that of introducing in a declarative fashion all the essential features, while keeping the language as close as possible to the syntax and semantics of the plain Horn-clause language. Special atoms and rules have been introduced, for representing: external events, to which the agent is able to respond (reactivity); actions (reactivity and proactivity); internal events (previous conclusions which can trigger further activity); past and present events (to be aware of what has happened). An extended resolution is provided, so that a DALI agent is able to answer queries like in the plain Horn-clause language, but is also able to cope with the different kinds of events, and exhibit a (rational) reactive and proactive behaviour.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.043886
0.024
0.016235
0.01338
0.008595
0.004656
0.001346
0.000439
0.00007
0.000005
0
0
0
0
An Introduction to Restricted Boltzmann Machines.
A survey of machine learning for big data processing. There is no doubt that big data are now rapidly expanding in all science and engineering domains. While the potential of these massive data is undoubtedly significant, fully making sense of them requires new ways of thinking and novel learning techniques to address the various challenges. In this paper, we present a literature survey of the latest advances in research on machine learning for big data processing. First, we review the machine learning techniques and highlight some promising learning methods in recent studies, such as representation learning, deep learning, distributed and parallel learning, transfer learning, active learning, and kernel-based learning. Next, we focus on the analysis and discussions about the challenges and possible solutions of machine learning for big data. Following that, we investigate the close connections of machine learning with signal processing techniques for big data processing. Finally, we outline several open issues and research trends.
How to Center Binary Restricted Boltzmann Machines.
Improving Content-based and Hybrid Music Recommendation using Deep Learning Existing content-based music recommendation systems typically employ a two-stage approach. They first extract traditional audio content features such as Mel-frequency cepstral coefficients and then predict user preferences. However, these traditional features, originally not created for music recommendation, cannot capture all relevant information in the audio and thus put a cap on recommendation performance. Using a novel model based on a deep belief network and a probabilistic graphical model, we unify the two stages into an automated process that simultaneously learns features from audio content and makes personalized recommendations. Compared with existing deep learning based models, our model outperforms them in both the warm-start and cold-start stages without relying on collaborative filtering (CF). We then present an efficient hybrid method to seamlessly integrate the automatically learnt features and CF. Our hybrid method not only significantly improves the performance of CF but also outperforms the traditional feature-based hybrid method.
Detonation Classification from Acoustic Signature with the Restricted Boltzmann Machine We compare the recently proposed Discriminative Restricted Boltzmann Machine (DRBM) to the classical Support Vector Machine (SVM) on a challenging classification task consisting in identifying weapon classes from audio signals. The three weapon classes considered in this work (mortar, rocket, and rocket-propelled grenade) are difficult to reliably classify with standard techniques because they tend to have similar acoustic signatures. In addition, specificities of the data available in this study make it challenging to rigorously compare classifiers, and we address methodological issues arising from this situation. Experiments show good classification accuracy that could make these techniques suitable for fielding on autonomous devices. DRBMs appear to yield better accuracy than SVMs, and are less sensitive to the choice of signal preprocessing and model hyperparameters. This last property is especially appealing in such a task where the lack of data makes model validation difficult.
Enhanced gradient for training restricted Boltzmann machines. Restricted Boltzmann machines (RBMs) are often used as building blocks in greedy learning of deep networks. However, training this simple model can be laborious. Traditional learning algorithms often converge only with the right choice of metaparameters that specify, for example, learning rate scheduling and the scale of the initial weights. They are also sensitive to specific data representation. An equivalent RBM can be obtained by flipping some bits and changing the weights and biases accordingly, but traditional learning rules are not invariant to such transformations. Without careful tuning of these training settings, traditional algorithms can easily get stuck or even diverge. In this letter, we present an enhanced gradient that is derived to be invariant to bit-flipping transformations. We experimentally show that the enhanced gradient yields more stable training of RBMs both when used with a fixed learning rate and an adaptive one.
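For context, the "traditional learning rule" these RBM papers improve on is plain contrastive divergence. Below is a minimal numpy sketch of one CD-1 update for a binary RBM; the array shapes, hyperparameters, and toy data are illustrative assumptions, and the bit-flip-invariant enhanced gradient itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def cd1_step(W, a, b, v0, lr=0.1):
    """One CD-1 update for a binary RBM.  W: (n_visible, n_hidden) weights;
    a, b: visible and hidden biases; v0: a (batch, n_visible) 0/1 array."""
    sigm = lambda x: 1.0 / (1.0 + np.exp(-x))
    ph0 = sigm(v0 @ W + b)                        # P(h=1 | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigm(h0 @ W.T + a)                      # one-step reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigm(v1 @ W + b)                        # negative phase
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n       # positive minus negative stats
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

# toy run on random binary "data"
W = 0.01 * rng.normal(size=(6, 4))
a, b = np.zeros(6), np.zeros(4)
data = (rng.random((32, 6)) < 0.5).astype(float)
for _ in range(100):
    W, a, b = cd1_step(W, a, b, data)
```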
Deep belief networks are compact universal approximators Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression.
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Beyond Doctors: Future Health Prediction from Multimedia and Multimodal Observations Although chronic diseases cannot be cured, they can be effectively controlled as long as we understand their progressions based on the current observational health records, which are often in the form of multimedia data. A large and growing body of literature has investigated the disease progression problem. However, far too little attention to date has been paid to jointly considering the following three observations of chronic disease progression: 1) the health statuses at different time points are chronologically similar; 2) the future health statuses of each patient can be comprehensively revealed from the current multimedia and multimodal observations, such as visual scans, digital measurements and textual medical histories; and 3) the discriminative capabilities of different modalities vary significantly in accordance with specific diseases. In the light of these observations, we propose an adaptive multimodal multi-task learning model to co-regularize the modality agreement, temporal progression and discriminative capabilities of the different modalities. We theoretically show that our proposed model is a linear system. Before training our model, we address the data missing problem via the matrix factorization approach. Extensive evaluations on a real-world Alzheimer's disease dataset well verify our proposed model. It should be noted that our model is also applicable to other chronic diseases.
Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives
Natural events This paper develops an inductive theory of predictive common sense reasoning. The theory provides the basis for an integrated solution to the three traditional problems of reasoning about change: the frame, qualification, and ramification problems. The theory is also capable of representing non-deterministic events, and it provides a means for stating defeasible preferences over the outcomes of conflicting simultaneous events.
Bounded Nondeterminism and Alternation in Parameterized Complexity Theory We give machine characterisations and logical descriptions of a number of parameterized complexity classes. The focus of our attention is the class W[P], which we characterise as the class of all parameterized problems decidable by a nondeterministic fixed-parameter tractable algorithm whose use of nondeterminism is bounded in terms of the parameter. We give similar characterisations for AW[P], the "alternating version of W[P]", and various other parameterized complexity classes. We also give logical characterisations of the classes W[P] and AW[P] in terms of fragments of least fixed-point logic, thereby putting these two classes into a uniform framework that we have developed in earlier work. Furthermore, we investigate the relation between alternation and space in parameterized complexity theory. In this context, we prove that the COMPACT TURING MACHINE COMPUTATION problem, shown to be hard for the class AW[SAT] in [1], is complete for the class uniform-XNL.
Solving Simple Planning Problems with More Inference and No Search Many benchmark domains in AI planning including Blocks, Logistics, Gripper, Satellite, and others lack the interactions that characterize puzzles and can be solved non-optimally in low polynomial time. They are indeed easy problems for people, although as with many other problems in AI, not always easy for machines. In this paper, we address the question of whether simple problems such as these can be solved in a simple way, i.e., without search, by means of a domain-independent planner. We address this question empirically by extending the constraint-based planner CPT with additional domain-independent inference mechanisms. We show then for the first time that these and several other benchmark domains can be solved with no backtracks while performing only polynomial node operations. This is a remarkable finding in our view that suggests that the classes of problems that are solvable without search may be actually much broader than the classes that have been identified so far by work in Tractable Planning.
Computational approaches to finding and measuring inconsistency in arbitrary knowledge bases There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
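Because the measures above are defined in terms of the set of minimal inconsistent subsets, it may help to see that definition executably. The sketch below is the naive, worst-case-exponential enumeration that the MUS-based algorithms in the paper work around; the consistency oracle is a caller-supplied stand-in for a SAT call, and the toy literal semantics is our own.

```python
from itertools import combinations

def minimal_inconsistent_subsets(kb, consistent):
    """Enumerate the MISes of 'kb' by testing subsets in increasing size
    and keeping the inconsistent ones with no inconsistent proper subset.
    'consistent' is an oracle (e.g., a SAT call) on sets of formulae."""
    mises = []
    for r in range(1, len(kb) + 1):
        for sub in combinations(kb, r):
            s = set(sub)
            if not consistent(s) and not any(m <= s for m in mises):
                mises.append(s)
    return mises

# toy oracle over literals: a set is inconsistent iff it has both l and ~l
consistent = lambda s: not any("~" + l in s for l in s if not l.startswith("~"))
print(minimal_inconsistent_subsets(["p", "~p", "q", "~q"], consistent))
# two MISes (set print order may vary): {'p', '~p'} and {'q', '~q'}
```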
1.029916
0.034286
0.029073
0.017143
0.015884
0.005836
0.001824
0.000107
0.000008
0.000001
0
0
0
0
Computational Logic - CL 2000, First International Conference, London, UK, 24-28 July, 2000, Proceedings
A Documentation Generator for (C)LP Systems We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in Ciao, ISO-Prolog, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintaining a true correspondence between the program and its documentation, and also identifying precisely to what version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc. ...) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, debugging, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and treat normally programs documented in this way. The documentation can be generated interactively from emacs or from the command line, in many formats including texinfo, dvi, ps, pdf, info, ascii, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc.
A Terminological Interpretation of (Abductive) Logic Programming The logic program formalism is commonly viewed as a modal or default logic. In this paper, we propose an alternative interpretation of the formalism as a terminological logic. A terminological logic is designed to represent two different forms of knowledge. A TBox represents definitions for a set of concepts. An ABox represents the assertional knowledge of the expert. In our interpretation, a logic program is a TBox providing definitions for all predicates; this interpretation is present...
Design and Implementation of the Physical Layer in WebBases: The XRover Experience Webbases are database systems that enable the creation of Web applications that allow end users to shop around for products and services at various Web sites without having to manually browse and fill out forms at each of these sites. In this paper we describe XRover, which is an implementation of the physical layer of the webbase architecture. This layer is primarily responsible for automatically locating and extracting dynamic data from Web sites, i.e., data that can only be obtained by form fill-outs. We discuss our experience in building XRover using FLORA, a deductive object-oriented system.
The logic of totally and partially ordered plans: a deductive database approach The problem of finding effective logic-based formalizations for problems involving actions remains one of the main application challenges of non-monotonic knowledge representation. In this paper, we show that complex planning strategies find natural logic-based formulations and efficient implementations in the framework of deductive database languages. We begin by modeling classical STRIPS-like totally ordered plans by means of Datalog_{1S} programs, and show that these programs have a stable model semantics that is also amenable to efficient computation. We then show that the proposed approach is quite expressive and flexible, and can also model partially ordered plans, which are abstract plans whereby each plan stands for a whole class of totally ordered plans. This results in a reduction of the search space and a subsequent improvement in efficiency.
Automata Theory for Reasoning About Actions In this paper, we show decidability of a rather expressive fragment of the situation calculus. We allow second order quantification over finite and infinite sets of situations. We do not impose a domain closure assumption on actions; therefore, infinite and even uncountable domains are allowed. The decision procedure is based on automata accepting infinite trees.
Well founded semantics for logic programs with explicit negation. The aim of this paper is to provide a semantics for general logic programs (with negation by default) extended with explicit negation, subsuming well founded semantics [22]. The Well Founded semantics for extended logic programs (WFSX) is expressible by a default theory semantics we have devised [11]. This relationship improves the cross-fertilization between logic programs and default theories, since we generalize previous results concerning their relationship [3, 4, 7, 1, 2], and there is...
An algorithm for probabilistic planning We define the probabilistic planning problem in terms of a probability distribution over initial world states, a boolean combination of propositions representing the goal, a probability threshold, and actions whose effects depend on the execution-time state of the world and on random chance. Adopting a probabilistic model complicates the definition of plan success: instead of demanding a plan that provably achieves the goal, we seek plans whose probability of success exceeds the threshold. In...
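Stated as a formula (our notation; the abstract itself introduces none): a plan π counts as a solution when

$$\Pr_{s_0 \sim \mu,\ \omega}\big[\,\mathrm{exec}(\pi, s_0, \omega) \models G\,\big] \;\ge\; \tau,$$

where μ is the distribution over initial world states, ω the random chance outcomes of the actions, G the boolean goal formula, and τ the probability threshold.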
A goal-oriented approach to computing the well-founded semantics Global SLS resolution is an ideal procedural semantics for the well-founded semantics. We present a more effective variant of global SLS resolution, called XOLDTNF resolution, which incorporates simple mechanisms for loop detection and handling. Termination is guaranteed for all programs with the bounded-term-size property. We establish the soundness and (search space) completeness of XOLDTNF resolution. An implementation of XOLDTNF resolution in Prolog is available via FTP.
Representing Concurrent Actions and Solving Conflicts As an extension of the well-known Action Description language A introduced by M. Gelfond and V. Lifschitz (7), C. Baral and M. Gelfond recently defined the dialect AC, which allows the description of concurrent actions (1). Also, a sound but incomplete encoding of AC by means of an extended logic program was presented there. In this paper, we work on interpretations of contradictory inferences from partial action descriptions. Employing an interpretation different from the one implicitly used in AC, we present a new dialect A+C, which allows inferring non-contradictory information from contradictory descriptions and describing nondeterminism and uncertainty. Furthermore, we give the first sound and complete encoding of AC, using equational logic programming, and extend it to A+C as well.
Complexity Results for Default Reasoning from Conditional Knowledge Bases Conditional knowledge bases have been proposed as belief bases that include defeasible rules (also called defaults) of the form "φ → ψ", which informally read as "generally, if φ then ψ." Such rules may have exceptions, which can be handled in different ways. A number of entailment semantics for conditional knowledge bases have been proposed in the literature. However, while the semantic properties and interrelationships of these formalisms are quite well understood, about their algorithmic properties only partial results are known so far. In this paper, we fill these gaps and draw a precise picture of the complexity of default reasoning from conditional knowledge bases: Given a conditional knowledge base KB and a default φ → ψ, does KB entail φ → ψ? We classify the complexity of this problem for a number of well-known approaches (including Goldszmidt et al.'s maximum entropy approach and Geffner's conditional entailment). We consider the general propositional case as well as syntactic restrictions (in particular, to Horn and literal-Horn conditional knowledge bases). Furthermore, we analyze the effect of precomputing rankings for the respective approaches. Our results complement and extend previous results, and contribute in exploring the tractability/intractability frontier of default reasoning from conditional knowledge bases.
ARC: A Self-Tuning, Low Overhead Replacement Cache We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashion. The policy ARC uses a learning rule to adaptively and continually revise its assumptions about the workload. The policy ARC is empirically universal, that is, it empirically performs as well as a certain fixed replacement policy, even when the latter uses the best workload-specific tuning parameter that was selected in an offline fashion. Consequently, ARC works uniformly well across varied workloads and cache sizes without any need for workload-specific a priori knowledge or tuning. Various policies such as LRU-2, 2Q, LRFU, and LIRS require user-defined parameters, and, unfortunately, no single choice works uniformly well across different workloads and cache sizes. The policy ARC is simple to implement and, like LRU, has constant complexity per request. In comparison, policies LRU-2 and LRFU both require logarithmic time complexity in the cache size. The policy ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache. On 23 real-life traces drawn from numerous domains, ARC leads to substantial performance gains over LRU for a wide range of cache sizes. For example, for an SPC1-like synthetic benchmark, at 4GB cache, LRU delivers a hit ratio of 9.19% while ARC achieves a hit ratio of 20%.
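The abstract describes ARC only at a high level; below is a compact Python sketch of the published algorithm as we understand it. The list names T1/T2/B1/B2 and the adaptation rule follow the paper, but the integer-division form of the adaptation step and the OrderedDict representation are our simplifications, so treat this as an illustration rather than a reference implementation.

```python
from collections import OrderedDict

class ARC:
    """Sketch of the Adaptive Replacement Cache. T1/T2 hold pages seen
    once/repeatedly; B1/B2 are ghost lists of recently evicted keys.
    The target size p of T1 adapts on ghost hits, trading recency
    against frequency online."""

    def __init__(self, capacity):
        self.c, self.p = capacity, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, key):
        # Evict from T1 if it exceeds its target p, else from T2.
        if self.t1 and (len(self.t1) > self.p or
                        (key in self.b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = None
        else:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = None

    def access(self, key):
        if key in self.t1:                      # hit on a once-seen page
            del self.t1[key]; self.t2[key] = None
            return True
        if key in self.t2:                      # hit on a frequent page
            self.t2.move_to_end(key)
            return True
        if key in self.b1:                      # ghost hit: grow T1's target
            self.p = min(self.c, self.p + max(1, len(self.b2) // len(self.b1)))
            self._replace(key); del self.b1[key]; self.t2[key] = None
            return False
        if key in self.b2:                      # ghost hit: shrink T1's target
            self.p = max(0, self.p - max(1, len(self.b1) // len(self.b2)))
            self._replace(key); del self.b2[key]; self.t2[key] = None
            return False
        # complete miss: make room, keeping the directory size <= 2c
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False); self._replace(key)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
            if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(key)
        self.t1[key] = None
        return False
```

The adaptation steps in access() are what make the policy self-tuning: a hit in B1 means the recency side was sized too small, a hit in B2 means the frequency side was.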
Processing in memory: the Terasys massively parallel PIM array This approach to processing in memory integrates single-instruction, multiple-data (SIMD) processing elements into the memory subsystem of a conventional computer. The processor-in-memory (PIM) chip is an enhanced 4-bit SRAM that associates a single-bit processor with each column of memory. To explore the viability of processing in memory, the authors built the Terasys workstation, a Sparcstation-2 augmented with 8 Mbytes of PIM memory holding 32K single-bit processors. They have also designed and implemented a high-level parallel language called data-parallel bit C (dbC). In normal memory mode, the PIM chips function as additional Sbus memory to the Sparc-2. In SIMD mode, the PIM chips accept commands from the Sparc-2 and execute those commands simultaneously on all PIM processors. Pairs of commands can be issued every 200 nanoseconds, giving an effective instruction issue rate of 100 ns. Peak performance for the 32K-processor system is 3.2 × 10^11 bit operations per second. Microcoded applications have reached (and in one case, exceeded) this theoretical peak, which is the equivalent of 25 Cray-YMP processors. With the successful creation of the Terasys research prototype, the authors have begun work on PIM in a supercomputer setting. In a collaborative research project with Cray Computer, they are incorporating a new Cray-designed implementation of the PIM chips into two octants of Cray-3 memory.
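A quick check of the quoted peak (our arithmetic, not from the abstract): one bit operation per processor per 100 ns gives

$$32{,}768 \ \text{processors} \times 10^{7}\ \tfrac{\text{ops}}{\text{s}} \;\approx\; 3.28 \times 10^{11}\ \tfrac{\text{bit ops}}{\text{s}},$$

consistent with the stated peak of 3.2 × 10^11 bit operations per second.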
Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification.
scores: 1.002722, 0.005426, 0.004348, 0.002713, 0.002174, 0.000725, 0.000244, 0.000051, 0.000011, 0.000002, 0, 0, 0, 0
Synthesis of Constraints for Mathematical Programming With One-Class Genetic Programming Mathematical programming (MP) models are common in optimization of real-world processes. Models are usually built by optimization experts in an iterative manner: an imperfect model is continuously improved until it approximates the reality well enough and meets all technical requirements (e.g., linearity). To facilitate this task, we propose a genetic one-class constraint synthesis method (GOCCS). Given a set of exemplary states of normal operation of a business process, GOCCS synthesizes constraints in linear programming or nonlinear programming form. The synthesized constraints can then be paired with an arbitrary objective function and supplied to an off-the-shelf solver to find optimal parameters of the process. We assess GOCCS on three families of MP benchmarks, with promising results. We also apply it to a real-world process of wine production and optimize that process.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
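For orientation, Reiter's defaults are inference rules of the following shape (standard notation from common presentations of default logic, not quoted from the abstract above):

$$\frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma}\,, \qquad \text{e.g.} \qquad \frac{\mathit{bird}(x) : \mathit{flies}(x)}{\mathit{flies}(x)}\,,$$

read: if α is believed and each β_i is consistent with the current beliefs, conclude γ. The example encodes "birds normally fly", and it is the consistency check that makes the resulting logic non-monotonic.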
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
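The definition suggests a directly executable (if exponential) check: a set of atoms M is stable iff M is the least model of the Gelfond-Lifschitz reduct P^M. A brute-force sketch over ground normal programs follows; the encoding of rules as (head, positive body, negative body) triples is our own convention.

```python
from itertools import combinations

def stable_models(rules, atoms):
    """Enumerate stable models of a ground normal logic program by
    brute force over candidate atom sets."""
    def least_model(definite):
        # Least model of a definite program by naive fixpoint iteration.
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in definite:
                if pos <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    for k in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), k):
            m = set(cand)
            # Reduct: drop rules blocked by M, delete remaining negation.
            reduct = [(h, set(p)) for h, p, n in rules if not (set(n) & m)]
            if least_model(reduct) == m:
                yield m

# p :- not q.   q :- not p.   Two stable models: {p} and {q}.
rules = [("p", [], ["q"]), ("q", [], ["p"])]
print(list(stable_models(rules, {"p", "q"})))   # [{'p'}, {'q'}]
```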
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
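To make the baseline concrete, here is the naive expansion-based evaluator that such provers improve upon; this is a sketch of QBF semantics, not of the paper's optimized algorithm, and the prefix/matrix encoding is ours.

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Naively evaluate a prenex QBF by expanding each quantifier.

    prefix: list of ('A' | 'E', var) pairs, outermost first.
    matrix: function from a dict of variable assignments to bool.
    Exponential time; Davis-Putnam-style QBF solvers prune this tree.
    """
    assignment = assignment or {}
    if not prefix:
        return matrix(assignment)
    q, v = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, v: val})
                for val in (False, True))
    return all(branches) if q == 'A' else any(branches)

# forall x exists y . (x <-> y) is true: choose y = x.
print(eval_qbf([('A', 'x'), ('E', 'y')], lambda a: a['x'] == a['y']))  # True
```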
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
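The construction is short enough to sketch directly: build the Gram matrix, center it in feature space, and eigendecompose. A minimal NumPy version (variable names and the small numerical floor are ours):

```python
import numpy as np

def kernel_pca(X, kernel, n_components):
    """Project training data onto the top kernel principal components.

    X: (n, d) data; kernel(a, b): scalar kernel function.
    Eigenvectors are rescaled by 1/sqrt(eigenvalue), the usual
    normalization so components have unit norm in feature space.
    """
    n = X.shape[0]
    K = np.array([[kernel(a, b) for b in X] for a in X])
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                           # projected training points

X = np.random.randn(50, 2)
proj = kernel_pca(X, lambda a, b: (a @ b + 1.0) ** 2, n_components=2)
```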
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Deep long short-term memory based model for agricultural price forecasting Agricultural price forecasting is one of the research hotspots in time series forecasting due to its unique characteristics. In this paper, we developed a deep long short-term memory (DLSTM) based model for the accurate forecasting of a nonstationary and nonlinear agricultural prices series. DLSTM model is a type of deep neural network which is advantageous in capturing the nonlinear and volatile patterns by utilizing both the recurrent architecture and deep learning methodologies together. The study further compares the price forecasting ability of the developed DLSTM model with conventional time-delay neural network (TDNN) and ARIMA models using international monthly price series of maize and palm oil. The empirical results demonstrate the superiority of the developed DLSTM model over other models in terms of various forecasting evaluation criteria like root mean square error, mean absolute percentage error and mean absolute deviation. The DLSTM model also showed dominance over other models in predicting the directional change of those monthly price series. Moreover, the accuracy of the forecasts obtained by all the models is also evaluated using the Diebold–Mariano test and the Friedman test whose results validate that the DLSTM based model has a clear advantage over the other two models.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Extent Like Performance From A Unix File System In an effort to meet the increasing throughput demands on the SunOS file system made both by applications and higher performance hardware, several optimization paths were examined. The principal constraints were that the on-disk file system format remain the same and that whatever changes were necessary not be user-visible. The solution arrived at was to approximate the behavior of extent based file systems by grouping I/O operations into clusters instead of dealing in individual blocks. A single clustered I/O may take the place of 15-30 block I/Os, resulting in roughly a factor of two improvement in sequential performance. The changes described were restricted to a small portion of the file system code; no user-visible changes were necessary and the on-disk format was not altered. This paper describes an enhancement to UFS that met all our goals. The remainder of the paper is divided into seven sections. The first section reviews the relevant background material. The second section discusses several possible solutions to the performance problems found in UFS. The third section describes the implementation of the solution we chose: file system I/O clustering. The fourth section discusses problems found in the interaction between the file system and the VM systems. The next section presents performance measurements of the modified file system. The sixth section compares this work to other work in this area. The final section discusses possible future enhancements.
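A toy illustration of the clustering idea (not the SunOS code, which clusters as blocks are allocated and as delayed writes are flushed; the max_cluster limit here is a hypothetical stand-in for a maxcontig-style file system parameter):

```python
def coalesce(blocks, max_cluster=16):
    """Group a sorted run of logical block numbers into contiguous
    clusters so each cluster can be issued as one large I/O."""
    clusters, run = [], [blocks[0]]
    for b in blocks[1:]:
        if b == run[-1] + 1 and len(run) < max_cluster:
            run.append(b)          # extend the current contiguous run
        else:
            clusters.append(run)   # gap or size limit: start a new cluster
            run = [b]
    clusters.append(run)
    return clusters

# Nine single-block I/Os become three clustered I/Os.
print(coalesce([4, 5, 6, 7, 20, 21, 22, 40, 41]))
# [[4, 5, 6, 7], [20, 21, 22], [40, 41]]
```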
Model-based resource provisioning in a web service utility Internet service utilities host multiple server applications on a shared server cluster. A key challenge for these systems is to provision shared resources on demand to meet service quality targets at least cost. This paper presents a new approach to utility resource management focusing on coordinated provisioning of memory and storage resources. Our approach is model-based: it incorporates internal models of service behavior to predict the value of candidate resource allotments under changing load. The model-based approach enables the system to achieve important resource management goals, including differentiated service quality, performance isolation, storage-aware caching, and proactive allocation of surplus resources to meet performance goals. Experimental results with a prototype demonstrate model-based dynamic provisioning under Web workloads with static content.
DataMesh-parallel storage systems for the 1990s A description is given of DataMesh, a research project with the goal of developing software to maximize the performance of mass storage I/O, while providing high availability, ease of use, and scalability. The DataMesh hardware architecture is that of an array of disk nodes, with each disk having a dedicated 20-MIPS single-chip processor and 8-32 MBytes RAM. The nodes are linked by a fast, reliable, small-area network, and programmed to appear as a single storage server. Phase 1 of the DataMesh project will provide smart disk arrays; phase 2 will expand this to include file systems; and phase 3 will support parallel databases, data searches, and other application-specific functions. The initial target of the work is storage servers for groups of high-powered workstations, although the techniques are applicable to several different problems.
Performance comparison of thrashing control policies for concurrent Mergesorts with parallel prefetching We study the performance of various run-time thrashing control policies for the merge phase of concurrent mergesorts using parallel prefetching, where initial sorted runs are stored on multiple disks and the final sorted run is written back to another dedicated disk. Parallel prefetching via multiple disks can be attractive in reducing the response times for concurrent mergesorts. However, severe thrashing may develop due to imbalances between input and output rates, thus a large number of prefetched pages in the buffer can be replaced before referenced. We evaluate through detailed simulations three run-time thrashing control policies: (a) disabling prefetching, (b) forcing synchronous writes and (c) lowering the prefetch quantity in addition to forcing synchronous writes. The results show that (1) thrashing resulted from parallel prefetching can severely degrade the system response time; (2) though effective in reducing the degree of thrashing, disabling prefetching may worsen the response time since more synchronous reads are needed; (3) forcing synchronous writes can both reduce thrashing and improve the response time; (4) lowering the prefetch quantity in addition to forcing synchronous writes is most effective in reducing thrashing and improving the response time.
Application-controlled physical memory using external page-cache management Next generation computer systems will have gigabytes of physical memory and processors in the 100 MIPS range or higher. Contrary to some conjectures, this trend requires more sophisticated memory management support for memory-bound computations such as scientific simulations and systems such as large-scale database systems, even though memory management for most programs will be less of a concern. We describe the design, implementation and evaluation of a virtual memory system that provides application control of physical memory using external page-cache management. In this approach, a sophisticated application is able to monitor and control the amount of physical memory it has available for execution, the exact contents of this memory, and the scheduling and nature of page-in and page-out using the abstraction of a physical page cache provided by the kernel. We claim that this approach can significantly improve performance for many memory-bound applications while reducing kernel complexity, yet does not complicate other applications or reduce their performance.
The logical disk: a new approach to improving file systems The Logical Disk (LD) defines a new interface to disk storage that separates file management and disk management by using logical block numbers and block lists. The LD interface is designed to support multiple file systems and to allow multiple implementations, both of which are important given the increasing use of kernels that support multiple operating system personalities.A log-structured implementation of LD (LLD) demonstrates that LD can be implemented efficiently. LLD adds about 5% to 10% to the purchase cost of a disk for the main memory it requires. Combining LLD with an existing file system results in a log-structured file system that exhibits the same performance characteristics as the Sprite log-structured file system.
File system design for an NFS file server appliance Network Appliance Corporation recently began shipping a new kind of network server called an NFS file server appliance, which is a dedicated server whose sole function is to provide NFS file service. The file system requirements for an NFS appliance are different from those for a general-purpose UNIX system, both because an NFS appliance must be optimized for network file access and because an appliance must be easy to use. This paper describes WAFL (Write Anywhere File Layout), which is a file system designed specifically to work in an NFS appliance. The primary focus is on the algorithms and data structures that WAFL uses to implement Snapshots, which are read-only clones of the active file system. WAFL uses a copy-on-write technique to minimize the disk space that Snapshots consume. This paper also describes how WAFL uses Snapshots to eliminate the need for file system consistency checking after an unclean shutdown.
RAID-II: a high-bandwidth network file server In 1989, the RAID (Redundant Arrays of Inexpensive Disks) group at U. C. Berkeley built a prototype disk array called RAID-I. The bandwidth delivered to clients by RAID-I was severely limited by the memory system bandwidth of the disk array's host workstation. We designed our second prototype, RAID-II, to deliver more of the disk array bandwidth to file server clients. A custom-built crossbar memory system called the XBUS board connects the disks directly to the high-speed network, allowing data for large requests to bypass the server workstation. RAID-II runs Log-Structured File System (LFS) software to optimize performance for bandwidth-intensive applications. The RAID-II hardware with a single XBUS controller board delivers 20 megabytes/second for large, random read operations and up to 31 megabytes/second for sequential read operations. A preliminary implementation of LFS on RAID-II delivers 21 megabytes/second on large read requests and 15 megabytes/second on large write operations.
Disk caching in large database and timeshared systems We present the results of a variety of trace-driven simulations of disk cache designs using traces from a variety of mainframe timesharing and database systems in production use. We compute miss ratios, run lengths, traffic ratios, cache residency times, degree of memory pollution and other statistics for a variety of designs, varying block size, prefetching algorithm and write algorithm. We find that for this workload, sequential prefetching produces a significant (about 20%) but still limited improvement in the miss ratio, even using a powerful technique for detecting sequentiality. Copy-back writing decreased write traffic relative to write-through by more than 50%; periodic flushing of the dirty blocks increased write traffic only slightly compared to pure write-back, and then only for large cache sizes. Write-allocate had little effect compared to no-write-allocate. Block sizes of over a track don't appear to be useful. Limiting cache occupancy by a single process or transaction appears to have little effect. This study is unique in the variety and quality of the data used in the studies.
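The basic measurement loop behind such trace-driven studies is simple to reproduce; a minimal sketch for an LRU cache and its miss ratio (the prefetching and write-policy variants the study evaluates would hook into this loop):

```python
from collections import OrderedDict

def lru_miss_ratio(trace, cache_blocks):
    """Trace-driven simulation of a simple LRU block cache,
    returning the miss ratio for a sequence of block references."""
    cache, misses = OrderedDict(), 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)        # hit: refresh recency
        else:
            misses += 1
            cache[block] = None             # miss: fetch the block
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict least recently used
    return misses / len(trace)

trace = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1]
print(lru_miss_ratio(trace, cache_blocks=3))   # 0.5
```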
Hot Block Clustering for Disk Arrays with Dynamic Striping
LegionFS: a secure and scalable file system supporting cross-domain high-performance applications Realizing that current file systems can not cope with the diverse requirements of wide-area collaborations, researchers have developed data access facilities to meet their needs. Recent work has focused on comprehensive data access architectures. In order to fulfill the evolving requirements in this environment, we suggest a more fully-integrated architecture built upon the fundamental tenets of naming, security, scalability, extensibility, and adaptability. These form the underpinning of the Legion File System (LegionFS). This paper motivates the need for these requirements and presents benchmarks that highlight the scalability of LegionFS. LegionFS aggregate throughput follows the linear growth of the network, yielding an aggregate read bandwidth of 193.8 MB/s on a 100 Mbps Ethernet backplane with 50 simultaneous readers. The serverless architecture of LegionFS is shown to benefit important scientific applications, such as those accessing the Protein Data Bank, within both local- and wide-area environments.
Logic programs with exceptions We extend logic programming to deal with default reasoning by allowing the explicit representation of exceptions in addition to general rules. To formalise this extension, we modify the answer set semantics of Gelfond and Lifschitz, which allows both classical negation and negation as failure. We also propose a transformation which eliminates exceptions by using negation by failure. The transformed program can be implemented by standard logic programming methods, such as SLDNF. The explicit representation of rules and exceptions has the virtue of greater naturalness of expression. The transformed program, however, is easier to implement.
A Framework for Distributed Object-Oriented Testing
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank, trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
scores: 1.011696, 0.009677, 0.009524, 0.008272, 0.003642, 0.002911, 0.001575, 0.000813, 0.000097, 0.000029, 0.000003, 0, 0, 0
The Refined Extension Principle for Semantics of Dynamic Logic Programming Over recent years, various semantics have been proposed for dealing with updates in the setting of logic programs. The availability of different semantics naturally raises the question of which are most adequate to model updates. A systematic approach to facing this question is to identify general principles against which such semantics could be evaluated. In this paper we motivate and introduce one new such principle: the refined extension principle. This principle is satisfied by the stable model semantics for (single) logic programs. It turns out that none of the existing semantics for logic program updates, even though generalisations of the stable model semantics, comply with this principle. For this reason, we define a refinement of the dynamic stable model semantics for Dynamic Logic Programs that complies with the principle.
From logic programs updates to action description updates An important branch of investigation in the field of agents has been the definition of high-level languages for representing effects of actions, the programs written in such languages usually being called action programs. Logic programming is an important area in the field of knowledge representation, and some languages for specifying updates of logic programs have been defined. Starting from the update language Evolp, in this work we propose a new paradigm for reasoning about actions called Evolp action programs. We provide translations of some of the best known action description languages into Evolp action programs, and underline some peculiar features of this newly defined paradigm. One such feature is that Evolp action programs can easily express changes in the rules of the domains, including rules describing changes.
Consistency of Clark's completion and existence of stable models.
Formal Characterization of Active Databases. In this paper we take a first step towards characterizing active databases. Declarative characterization of active databases allows additional flexibility in studying the effects of different priority criteria between fireable rules, different actions and event definitions, and also to make claims about effects of transactions and prove them without actually executing them. Our characterization is related but different from similar attempts by Zaniolo in terms of making a clear distinction...
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
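One concise characterization of this partial model, stated in later alternating-fixpoint notation (Van Gelder's) rather than the paper's own: with $\Gamma_P(I)$ the least Herbrand model of the Gelfond-Lifschitz reduct $P^I$, the operator $\Gamma_P$ is antimonotone, so $\Gamma_P^2$ is monotone, and

$$W^{+} = \mathrm{lfp}(\Gamma_P^{2}), \qquad W^{-} = \mathcal{B}_P \setminus \mathrm{gfp}(\Gamma_P^{2}),$$

give the true and false atoms respectively; atoms in $\mathrm{gfp}(\Gamma_P^{2}) \setminus \mathrm{lfp}(\Gamma_P^{2})$ are undefined.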
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Evaluating collaborative filtering recommender systems Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated.
Practical Issues in Temporal Difference Learning This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
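TD(λ) itself is compact enough to show concretely. Below is a minimal tabular version with accumulating eligibility traces; the framing of an episode as a list of (state, reward, next-state) transitions is our toy simplification, whereas the paper's actual setting is a neural network value function trained by self-play.

```python
import numpy as np

def td_lambda_episode(V, transitions, alpha=0.1, gamma=1.0, lam=0.7):
    """Run one episode of tabular TD(lambda). V is an array of state
    values; transitions is a list of (state, reward, next_state),
    with next_state None at termination."""
    e = np.zeros_like(V)                     # eligibility traces
    for s, r, s_next in transitions:
        v_next = V[s_next] if s_next is not None else 0.0
        delta = r + gamma * v_next - V[s]    # TD error at this step
        e[s] += 1.0                          # mark s as eligible for credit
        V += alpha * delta * e               # update every traced state
        e *= gamma * lam                     # decay credit assignment
    return V

V = np.zeros(3)
td_lambda_episode(V, [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)])
print(V)   # earlier states receive lambda-discounted shares of the reward
```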
NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions.
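The core of the argument is what is now called the isolation technique. In modern notation (our paraphrase; the constants vary across presentations), there is a randomized polynomial-time reduction mapping a CNF formula F over n variables to a formula F' such that

$$F \notin \mathrm{SAT} \implies F' \notin \mathrm{SAT}, \qquad F \in \mathrm{SAT} \implies \Pr\big[\,F' \text{ has exactly one satisfying assignment}\,\big] \;\ge\; \Omega(1/n),$$

so any efficient procedure for uniquely-satisfiable instances yields, after polynomially many independent trials, a randomized algorithm for SAT, which is what drives corollaries such as "if the parity of the number of solutions is in RP then NP = RP."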
A Stable Distributed Scheduling Algorithm
Application performance and flexibility on exokernel systems The exokernel operating system architecture safely gives untrusted software efficient control over hardware and software resources by separating management from protection. This paper describes an exokernel system that allows specialized applications to achieve high performance without sacrificing the performance of unmodified UNIX programs. It evaluates the exokernel architecture by measuring end-to-end application performance on Xok, an exokernel for Intel x86-based computers, and by comparing Xok's performance to the performance of two widely-used 4.4BSD UNIX systems (FreeBSD and OpenBSD). The results show that common unmodified UNIX applications can enjoy the benefits of exokernels: applications either perform comparably on Xok/ExOS and the BSD UNIXes, or perform significantly better. In addition, the results show that customized applications can benefit substantially from control over their resources (e.g., a factor of eight for a Web server). This paper also describes insights about the exokernel approach gained through building three different exokernel systems, and presents novel approaches to resource multiplexing.
Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
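Although the paper's contribution is the extra mirrored parity layer, the underlying parity arithmetic is plain XOR; a minimal sketch of computing a parity element and rebuilding a lost data block from it:

```python
def xor_parity(blocks):
    """Parity of equal-length data blocks: byte-wise XOR, the same
    computation used for RAID parity elements."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild one lost block: XOR is its own inverse, so the lost
    block equals the parity XORed with all surviving blocks."""
    return xor_parity(list(surviving_blocks) + [parity])

d = [b"\x01\x02", b"\xff\x00", b"\x10\x20"]
p = xor_parity(d)
assert recover([d[0], d[2]], p) == d[1]   # lost middle block restored
```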
scores: 1.2, 0.066667, 0.02, 0.007692, 0.003704, 0, 0, 0, 0, 0, 0, 0, 0, 0
Deliberative acting, planning and learning with hierarchical operational models In AI research, synthesizing a plan of action has typically used descriptive models of the actions that abstractly specify what might happen as a result of an action, and are tailored for efficiently computing state transitions. However, executing the planned actions has needed operational models, in which rich computational control structures and closed-loop online decision-making are used to specify how to perform an action in a nondeterministic execution context, react to events and adapt to an unfolding situation. Deliberative actors, which integrate acting and planning, have typically needed to use both of these models together—which causes problems when attempting to develop the different models, verify their consistency, and smoothly interleave acting and planning.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
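A hedged illustration of the stable model semantics just described: a brute-force check via the Gelfond-Lifschitz reduct. The program encoding and helper names are mine, not the paper's.

```python
def least_model(positive_rules):
    # least model of a negation-free program: iterate the
    # immediate-consequence operator to a fixpoint
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """program: list of (head, positive_body, negative_body) triples."""
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate, then erase the negative literals of the surviving rules
    reduct = [(h, pos) for h, pos, neg in program if not (neg & candidate)]
    return least_model(reduct) == candidate

# p :- not q.  q :- not p.  has exactly the stable models {p} and {q}
prog = [('p', set(), {'q'}), ('q', set(), {'p'})]
print(is_stable(prog, {'p'}), is_stable(prog, {'q'}), is_stable(prog, set()))
```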
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
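For concreteness, a naive recursive QBF evaluator over a prenex-CNF formula, without any of the pruning techniques the paper adds; the encoding (signed integers for literals) is an assumption.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """prefix: list of ('forall' | 'exists', var); clauses: lists of
    signed ints, a positive literal meaning the variable is true."""
    assignment = assignment or {}
    if not prefix:
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, v: val})
                for val in (False, True))
    return all(branches) if q == 'forall' else any(branches)

# forall x exists y: (x or y) and (not x or not y) -- true (take y = not x)
print(eval_qbf([('forall', 1), ('exists', 2)], [[1, 2], [-1, -2]]))
```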
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
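A compact sketch of the kernel PCA computation described above, with an RBF kernel standing in for the paper's polynomial example; all names and the gamma parameter are illustrative.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    # RBF kernel matrix (the paper's example uses polynomial kernels)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # center the kernel matrix in feature space
    n = len(K)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J
    # eigendecompose; eigh returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1][:n_components], vecs[:, ::-1][:, :n_components]
    # scale eigenvectors so the feature-space components have unit norm
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas  # projections of the training points
```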
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
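The core linear-algebra step the abstract refers to, as a sketch: factorize the information matrix A^T A into square-root form and solve by substitution. In practice a sparse, column-reordered factorization (e.g., via COLAMD) keeps R sparse; this dense version only shows the idea, and all names are mine.

```python
import numpy as np

def solve_sam(A, b):
    """Solve the linearized SLAM least-squares problem min ||A x - b||^2
    via the square-root information matrix R, where A^T A = R^T R."""
    R = np.linalg.cholesky(A.T @ A).T   # upper-triangular square root
    y = np.linalg.solve(R.T, A.T @ b)   # forward substitution
    return np.linalg.solve(R, y)        # back substitution

A = np.random.randn(20, 6)              # stand-in measurement Jacobian
b = np.random.randn(20)
assert np.allclose(solve_sam(A, b), np.linalg.lstsq(A, b, rcond=None)[0])
```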
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-sized knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
CLUS_GPU-BLASTP: accelerated protein sequence alignment using GPU-enabled cluster. Basic Local Alignment Search Tool (BLAST) is one of the most frequently used algorithms for bioinformatics applications. In this paper, an accelerated implementation of protein BLAST, i.e., CLUS_GPU-BLASTP, for processing multiple query sequences in parallel on a graphics processing unit (GPU)-enabled high-performance cluster is proposed. The experimental setup consisted of a high-performance GPU-enabled cluster. Each compute node of the cluster consisted of two hex-core Intel Xeon 2.93 GHz processors with 50 GB RAM and 12 MB cache. Each compute node was also equipped with an NVIDIA M2050 GPU. In comparison with the famous GPU-BLAST, our BLAST implementation is 2.1 times faster on a single compute node. On a cluster of 12 compute nodes, our implementation gave a speedup of 13.2X. In comparison with standard single-threaded NCBI-BLAST, our implementation achieves a speedup ranging from 7.4X to 8.2X.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-sized knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
SP-RAN: Self-Paced Residual Aggregated Network for Solar Panel Mapping in Weakly Labeled Aerial Images With the rapid development of the solar distribution, solar panel mapping is becoming increasingly valuable to decision-makers. Weakly supervised methods have been developed to reduce the cost in training sample collection, and the most successful ones follow the alternative training scheme, which first generates coarse object localizations as pseudo labels (PLs) and then utilizes these PLs to train an end-to-end network for object extraction. As remote sensing images are typically characterized by multiple occurrences of objects and complicated backgrounds, the alternative training scheme suffers from low mapping accuracy and deficient boundary maintenance due to the varying quality of PLs. In this article, we focus on addressing these problems by adaptively adjusting the contributions of quality-varying PLs and propose a novel self-paced residual aggregated network (SP-RAN) for solar panel mapping. Specifically, with the initial PLs generated by gradient-weighted class activation mapping, a residual aggregated network is designed for target mapping with special consideration for the capability in producing complete and well-shaped mapping results. Considering the inconsistent quality of PLs, an effective confidence-aware (CA) loss is developed to emphasize the contribution of high-quality PLs and alleviate the negative impacts brought by the bad-quality ones in the training phase. Moreover, to concentrate on boundary maintenance, a novel self-paced label correction (SP-LC) strategy is proposed to selectively update PLs by considering their reliability. Extensive experimental comparisons with state-of-the-art methods and ablation study on two aerial datasets and a remote sensing dataset demonstrate the superiority of the proposed method.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-sized knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Logic Programming System for Nonmonotonic Reasoning The evolution of logic programming semantics has included the introduction of a new explicit form of negation, beside the older implicit (or default) negation typical of logic programming. The richer language has been shown adequate for a spate of knowledge representation and reasoning forms.
Efficient top-down computation of queries under the well-founded semantics The well-founded model provides a natural and robust semantics for logic programs with negative literals in rule bodies. Although various procedural semantics have been proposed for query evaluation under the well-founded semantics, the practical issues of implementation for effective and efficient computation of queries have been rarely discussed.
Default Theory for Well Founded Semantics with Explicit Negation One aim of this paper is to define a default theory for Well Founded Semantics of logic programs which have been extended with explicit negation, such that the models of a program correspond exactly to the extensions of the default theory corresponding to the program.
Extending Conventional Planning Techniques to Handle Actions with Context-Dependent Effects
Proving Termination of General Prolog Programs We study here termination of general logic programs with the Prolog selection rule. To this end we extend the approach of Apt and Pedreschi [AP90] and consider the class of left terminating general programs. These are general logic programs that terminate with the Prolog selection rule for all ground goals. We introduce the notion of an acceptable program and prove that acceptable programs are left terminating. This provides us with a practical method of proving termination.
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
Representing actions in logic programming and its applications in database updates
Propositional lower bounds: Algorithms and complexity Propositional greatest lower bounds (GLBs) are logically-defined approximations of a knowledge base. They were defined in the context of Knowledge Compilation, a technique developed for addressing the high computational cost of logical inference. A GLB allows for polynomial-time complete on-line reasoning, although soundness is not guaranteed. In this paper we propose new algorithms for the generation of a GLB. Furthermore, we give a precise characterization of the computational complexity of the problem of generating such lower bounds, thus addressing in a formal way the question "how many queries are needed to amortize the overhead of compilation?"
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
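The "simple algorithm for determining acyclicity" mentioned above is commonly realized as GYO (ear) reduction; here is a small sketch of that test, in my own formulation rather than the paper's.

```python
def is_acyclic(hyperedges):
    """GYO reduction: repeatedly remove an 'ear' -- a hyperedge whose
    vertices, ignoring those unique to it, lie inside another hyperedge.
    The schema is acyclic iff the reduction removes all edges but one."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed and len(edges) > 1:
        changed = False
        for i, e in enumerate(edges):
            others = edges[:i] + edges[i + 1:]
            shared = {v for v in e if any(v in o for o in others)}
            if any(shared <= o for o in others):
                edges.pop(i)
                changed = True
                break
    return len(edges) <= 1

print(is_acyclic([{'A', 'B'}, {'B', 'C'}, {'C', 'D'}]))  # chain: True
print(is_acyclic([{'A', 'B'}, {'B', 'C'}, {'A', 'C'}]))  # cycle: False
```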
Domain-independent planning: representation and plan generation A domain-independent planning program that supports both automatic and interactive generation of hierarchical, partially ordered plans is described. An improved formalism makes extensive use of constraints and resources to represent domains and actions more powerfully. The formalism also offers efficient methods for representing properties of objects that do not change over time, allows specification of the plan rationale (which includes scoping of conditions and appropriately relating different levels in the hierarchy), and provides the ability to express deductive rules for deducing the effects of actions. The implications of allowing parallel actions in a plan or problem solution are discussed, and new techniques for efficiently detecting and remedying harmful parallel interactions are presented. The most important of these techniques, reasoning about resources, is emphasized and explained. The system supports concurrent exploration of different branches in the search, making best-first search easy to implement.
The first probabilistic track of the international planning competition The 2004 International Planning Competition, IPC-4, included a probabilistic planning track for the first time. We describe the new domain specification language we created for the track, our evaluation methodology, the competition domains we developed, and the results of the participating teams.
Principled design of the modern Web architecture The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia application. The modern Web architecture emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. In this article we introduce the Representational State Transfer (REST) architectural style, developed as an abstract model of the Web architecture and used to guide our redesign and definition of the Hypertext Transfer Protocol and Uniform Resource Identifiers. We describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. We then compare the abstract model to the currently deployed Web architecture in order to elicit mismatches between the existing protocols and the applications they are intended to support.
Total-order multi-agent task-network planning for contract bridge This paper describes the results of applying a modified version of hierarchical task-network (HTN) planning to the problem of declarer play in contract bridge. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. This approach avoids the difficulties that traditional game-tree search techniques have with imperfect-information games such as bridge--but it also differs in several significant ways from the planning techniques used in typical HTN planners. We describe why these modifications were needed in order to build a successful planner for bridge. This same modified HTN planning strategy appears to be useful in a variety of application domains--for example, we have used the same planning techniques in a process-planning system for the manufacture of complex electro-mechanical devices (Hebbar et al. 1996). We discuss why the same technique has been successful in two such diverse domains.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional PC-based sequence alignment cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
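For context, this is the dynamic-programming recurrence at the heart of the local-alignment algorithms such FPGA designs accelerate (Smith-Waterman with a linear gap penalty); the scoring parameters below are arbitrary.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score of strings a and b; the anti-diagonal
    cells of H are independent, which is what FPGA pipelines exploit."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```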
1.058405
0.025036
0.020239
0.020099
0.013397
0.005768
0.002412
0.000209
0.00003
0.000003
0
0
0
0
Proximal Methods for Sparse Hierarchical Dictionary Learning We propose to combine two approaches for modeling data admitting sparse representations: on the one hand, dictionary learning has proven effective for various signal processing tasks. On the other hand, recent work on structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We thus consider a tree-structured sparse regularization to learn dictionaries embedded in a hierarchy. The involved proximal operator is computable exactly via a primal-dual method, allowing the use of accelerated gradient techniques. Experiments show that for natural image patches, learned dictionary elements organize themselves in such a hierarchical structure, leading to an improved performance for restoration tasks. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
DEFEATnet -- A Deep Conventional Image Representation for Image Classification To study underlying possibilities for the successes of conventional image representation and deep neural networks in image representation, we propose a DEep FEATure extraction, encoding, and pooling network (DEFEATnet) architecture, which is a marriage between conventional image representation approaches and deep neural networks. Particularly in DEFEATnet, each layer consists of three components: feature extraction, feature encoding, and pooling. The primary advantage of DEFEATnet is two-fold: i) It consolidates the prior knowledge (e.g., translation invariance) from extracting, encoding and pooling handcrafted features, as in the conventional feature representation approaches; ii) It represents the object parts at different granularities by gradually increasing the local receptive fields in different layers, as in deep neural networks. Moreover, DEFEATnet is a generalized framework that can readily incorporate all types of local features as well as all kinds of well-designed feature encoding and pooling methods. Since prior knowledge is preserved in DEFEATnet, it is especially useful for image representation on small/medium size datasets where deep neural networks usually fail due to the lack of sufficient training data. Promising experimental results clearly show that DEFEATnets outperform shallow conventional image representation approaches by a large margin when the same type of features, feature encoding and pooling are used. The extensive experiments also demonstrate the effectiveness of the deep architecture of our DEFEATnet in improving the robustness for image representation.
Learning Shared, Discriminative, and Compact Representations for Visual Recognition Dictionary-based and part-based methods are among the most popular approaches to visual recognition. In both methods, a mid-level representation is built on top of low-level image descriptors and high-level classifiers are trained on top of the mid-level representation. While earlier methods built the mid-level representation without supervision, there is currently great interest in learning both representations jointly to make the mid-level representation more discriminative. In this work we propose a new approach to visual recognition that jointly learns a shared, discriminative, and compact mid-level representation and a compact high-level representation. By using a structured output learning framework, our approach directly handles the multiclass case at both levels of abstraction. Moreover, by using a group-sparse prior in the structured output learning framework, our approach encourages sharing of visual words and thus reduces the number of words used to represent each class. We test our proposed method on several popular benchmarks. Our results show that, by jointly learning mid- and high-level representations, and fostering the sharing of discriminative visual words among target classes, we are able to achieve state-of-the-art recognition performance using far fewer visual words than previous approaches.
Proximal Methods for Hierarchical Sparse Coding Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper, we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the l1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally self-organize in a prespecified arborescent structure, leading to better performance in reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
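A minimal sketch of the composed proximal operator this abstract describes: for tree-structured l2 group norms, applying group soft-thresholding from the leaves up to the root computes the exact prox. The group encoding and names are assumptions for illustration.

```python
import numpy as np

def prox_tree(v, groups, lam):
    """groups: (index_array, weight) pairs ordered so that every child
    group precedes its ancestors; each step is one group soft-threshold."""
    v = v.copy()
    for g, w in groups:
        norm = np.linalg.norm(v[g])
        v[g] *= max(0.0, 1.0 - lam * w / norm) if norm > 0 else 0.0
    return v

# toy hierarchy on 4 coefficients: two leaf groups, then the root group
groups = [(np.array([0, 1]), 1.0), (np.array([2, 3]), 1.0),
          (np.array([0, 1, 2, 3]), 1.0)]
print(prox_tree(np.array([3.0, 4.0, 0.1, 0.1]), groups, lam=0.5))
```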
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
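A sketch of the alternation the abstract describes, with ISTA standing in for the paper's feature-sign search on the L1 subproblem and a projected gradient step standing in for its Lagrange-dual dictionary solver; the learning rate and iteration counts are arbitrary.

```python
import numpy as np

def sparse_coding(X, n_atoms, lam, outer=30, inner=50, lr=0.01):
    n, d = X.shape
    D = np.random.randn(d, n_atoms)
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n, n_atoms))
    for _ in range(outer):
        # codes: L1-regularized least squares, solved here by ISTA
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of grad
        for _ in range(inner):
            Z = A - ((A @ D.T - X) @ D) / L
            A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
        # dictionary: L2-constrained least squares, here one projected
        # gradient step with columns renormalized onto the unit ball
        D -= lr * (A @ D.T - X).T @ A / n
        D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)
    return D, A
```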
Backpropagation Applied to Handwritten Zip Code Recognition. The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.
Training connectionist models for the structured language model We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.
Slow feature analysis: unsupervised learning of invariances. Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
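The algorithm as described reduces to a few linear-algebra steps; below is a sketch with a quadratic expansion (function names and tolerances are mine).

```python
import numpy as np

def sfa(x, n_features):
    # nonlinear (quadratic) expansion of the input signal
    n, d = x.shape
    z = np.hstack([x] + [x[:, i:i + 1] * x[:, j:j + 1]
                         for i in range(d) for j in range(i, d)])
    z -= z.mean(axis=0)
    # whiten the expanded signal (first PCA)
    vals, vecs = np.linalg.eigh(z.T @ z / n)
    keep = vals > 1e-10
    zw = z @ (vecs[:, keep] / np.sqrt(vals[keep]))
    # PCA on the time derivative; the slowest features are the
    # directions of smallest derivative variance (eigh sorts ascending)
    dz = np.diff(zw, axis=0)
    dvals, dvecs = np.linalg.eigh(dz.T @ dz / len(dz))
    return zw @ dvecs[:, :n_features]
```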
Greedy Layer-Wise Training of Deep Networks Deep multi-layer neural networks have many levels of non-linearities, which allows them to potentially represent very compactly highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task.
Dependent Fluents We discuss the persistence of the indirect effects of an action—the question when such effects are subject to the commonsense law of inertia, and how to describe their evolution in the cases when inertia does not apply. Our model of nonpersistent effects involves the assumption that the value of the fluent in question is determined by the values of other fluents, although the dependency may be partially or completely unknown. This view leads us to a new high-level action language ARD (for Actions, Ramifications and Dependencies) that is capable of describing both persistent and nonpersistent effects. Unlike the action languages introduced in the past, ARD is "non-Markovian," in the sense that the evolution of the fluents described in this language may depend on their history, and not only on their current values.
The Tractable Cognition Thesis. The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of "computational tractability" is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified.
Towards Self-Tuning Memory Management for Data Servers Although today's computers provide huge amounts of main memory, the ever-increasing load of large data servers, imposed by resource-intensive decision-support queries and accesses to multimedia and other complex data, often leads to memory contention and may result in severe performance degradation. Therefore, careful tuning of memory management is crucial for heavy-load data servers. This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system. The common, fundamental elements in these methods include on-line load tracking, near-future access prediction based on stochastic models and the available on-line statistics, and dynamic and automatic adjustment of control parameters in a feedback loop.
Spatial computing: an emerging paradigm for autonomic computing and communication Emerging distributed computing scenarios call for novel “autonomic” approaches to distributed systems development and management. In this position paper we analyze the distinguishing characteristics of those scenarios, discuss the inadequacy of traditional paradigms, and elaborate on the primary role of “space” in modern distributed computing. In particular, we show that spatial abstractions promise to be basic necessary ingredients for a novel “spatial computing” paradigm, acting as a unifying framework for autonomic computing and communication. On this basis, we propose a preliminary “spatial computing stack” to frame the key concepts and mechanisms of spatial computing. Finally, we sketch a research agenda in the area.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
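A back-of-the-envelope sketch of the layout this abstract describes, assuming XOR-based row and column parities and assuming, arbitrarily, that the n extra elements mirror the row parities; block values and the array size are illustrative only.

```python
# Sketch: n x n data blocks with XOR row/column parities (2n elements),
# plus n extra elements mirroring the row parities for added redundancy.
import numpy as np

n = 4
data = np.random.randint(0, 256, size=(n, n), dtype=np.uint8)  # n^2 data blocks
row_parity = np.bitwise_xor.reduce(data, axis=1)               # n parity elements
col_parity = np.bitwise_xor.reduce(data, axis=0)               # n parity elements
extra = row_parity.copy()       # n extra elements (mirror of the row parities)

# Parity calculations are unchanged: a lost data block is still recovered
# as the XOR of the surviving blocks in its row with the row parity.
i, j = 1, 2                     # pretend block (1, 2) is lost
recovered = np.bitwise_xor.reduce(np.delete(data[i], j)) ^ row_parity[i]
assert recovered == data[i, j]
```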
1.072278
0.066667
0.066667
0.02
0.003065
0.000161
0.000014
0.000005
0.000001
0
0
0
0
0
Learning two-pathway convolutional neural networks for categorizing scene images. Scenes are closely related to the kinds of objects that may appear in them. Objects are widely used as features for scene categorization. On the other hand, landscapes with more spatial structures of scenes are representative of scene categories. In this paper, we propose a deep learning based algorithm for scene categorization. Specifically, we design two-pathway convolutional neural networks for exploiting both object attributes and spatial structures of scene images. Different from conventional deep learning methods, which usually focus on only one aspect of images, each pathway of the proposed architecture is tuned to capture a different aspect of images. As a result, complementary information of image contents can be utilized effectively. In addition, to deal with the feature redundancy problem caused by combining features from different sources, we adopt the ℓ2,1 norm during classifier training to control the selectivity of each type of features. Extensive experiments are conducted to evaluate the proposed method. The obtained results demonstrate that the proposed approach achieves superior performance over conventional methods. Moreover, the proposed method is a general framework, which can be easily extended to more pathways and applied to solve other problems.
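The ℓ2,1 regularizer mentioned above is simple to write down. A minimal sketch; the shapes and the weighting constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1}: the sum of l2 norms of the rows of W. Penalizing it drives
    whole rows (i.e., whole features) to zero together."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

W = np.random.randn(512, 10)  # e.g. 512 concatenated two-pathway features, 10 classes
reg = 0.01 * l21_norm(W)      # regularization term; lambda = 0.01 is an assumption
```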
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
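For ground normal programs, the stable model condition can be checked operationally through the Gelfond-Lifschitz reduct. The sketch below is a textbook rendering with an illustrative rule encoding, not code from the paper.

```python
# Minimal stable-model check via the Gelfond-Lifschitz reduct, for ground
# normal programs. A rule is (head, positive_body, negative_body).
def is_stable(program, candidate):
    # Reduct: drop rules whose negative body intersects the candidate,
    # then delete all remaining negative literals.
    reduct = [(h, pos) for (h, pos, neg) in program
              if not (set(neg) & candidate)]
    # Least model of the (negation-free) reduct via fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model == candidate

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
prog = [("p", [], ["q"]), ("q", [], ["p"])]
assert is_stable(prog, {"p"}) and is_stable(prog, {"q"})
assert not is_stable(prog, {"p", "q"})
```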
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
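As a baseline for what such an evaluator must decide, here is the naive recursive expansion of a prenex QBF, the straightforward (and inefficient) algorithm the paper improves on; the formula encoding is an assumption made for illustration.

```python
# Naive QBF evaluation by recursive expansion over both truth values.
# prefix: list of (quantifier, variable); matrix: callable on an assignment dict.
def eval_qbf(prefix, matrix, assign=None):
    assign = dict(assign or {})
    if not prefix:
        return matrix(assign)
    (q, v), rest = prefix[0], prefix[1:]
    results = (eval_qbf(rest, matrix, {**assign, v: val})
               for val in (False, True))
    return all(results) if q == "forall" else any(results)

# forall x exists y. (x <-> y)  -- true: choose y = x.
prefix = [("forall", "x"), ("exists", "y")]
matrix = lambda a: a["x"] == a["y"]
assert eval_qbf(prefix, matrix)
```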
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
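The method reduces to an eigendecomposition of the centered kernel matrix. A minimal sketch, assuming a polynomial kernel (degree 5 echoes the five-pixel-products example) and random stand-in data.

```python
# Kernel PCA sketch: eigendecompose the centered kernel matrix and project.
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]        # sort eigenpairs descending
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                            # projections of training points

X = np.random.randn(100, 16 * 16)                 # stand-in for 16x16 image patches
Z = kernel_pca(X)                                 # (100, 2) nonlinear components
```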
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
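The paper's RT-DMQ and RT-CMQ policies are not spelled out in this abstract, so the following is only a generic illustration of why mirroring can cut read latency: send each read to the less loaded copy, while writes must go to both.

```python
# Generic mirrored-pair dispatch (not the paper's RT-DMQ/RT-CMQ policies).
queues = {"disk_a": [], "disk_b": []}

def dispatch(request):
    if request["op"] == "read":
        # Reads can be served by either copy: pick the shorter queue.
        target = min(queues, key=lambda d: len(queues[d]))
        queues[target].append(request)
    else:
        # Writes must update both copies to keep the mirror consistent.
        for q in queues.values():
            q.append(request)

dispatch({"op": "read", "block": 42})
dispatch({"op": "write", "block": 7})
assert len(queues["disk_a"]) + len(queues["disk_b"]) == 3  # read once, write twice
```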
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
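The "square root" refers to factorizing the measurement Jacobian rather than forming and inverting a covariance matrix as the EKF does. A minimal linear-algebra sketch with a toy dense Jacobian; the paper's variable ordering and incremental machinery are omitted.

```python
# Solve the linearized SLAM least-squares step via QR on the Jacobian,
# instead of forming/inverting a covariance as the EKF does.
import numpy as np

A = np.random.randn(50, 12)  # toy measurement Jacobian (50 measurements, 12 states)
b = np.random.randn(50)      # toy residual vector
Q, R = np.linalg.qr(A)       # "square root" factorization of A
delta = np.linalg.solve(R, Q.T @ b)   # state update by back-substitution
assert np.allclose(R.T @ R, A.T @ A)  # R is a square root of the information matrix
```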
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dynamic Duty-Cycle Control for Wireless Sensor Networks Using Artificial Neural Network (ANN) Efficient management of the duty-cycle in Wireless Sensor Networks (WSN) results in considerable performance improvement of MAC protocols by catering to the issues of overhearing and idle listening. Researchers' interest has recently shifted to adaptive control of the duty-cycle, which offers an opportunity to conserve energy in WSNs operating in highly dynamic environments, without external intervention. Machine learning techniques such as Artificial Neural Networks (ANN) have been deployed for embedding artificial decision-making capabilities in WSNs. This paper integrates an ANN with a WSN to achieve dynamic duty-cycle control at the receiver end. The ANN is used to predict the data arrival instants at which the receiver should wake up. The scheme has been implemented in MATLAB, and the results reveal significant improvements in delay and energy compared to fixed wakeup and sleep intervals.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Generating user interfaces from data models and dialogue net specifications A method and a set of supporting tools have been developed for an improved integration of user interface design with software engineering methods and tools. Animated user interfaces for database-oriented applications are generated from an extended data model and a new graphical technique for specifying dialogues. Based on views defined for the data model, an expert system uses explicit design rules derived from existing guidelines for producing the static layout of the user interface. A Petri-net-based technique called dialogue nets is used for specifying the dynamic behaviour. Output is generated for an existing user interface management system. The approach supports rapid prototyping while using the advantages of standard software engineering methods.
User interface declarative models and development environments: a survey Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which user interface declarative models can be constructed and related, as part of the user interface design process. This paper provides a review of MB-UIDE technologies. A framework for describing the elements of a MB-UIDE is presented. A representative collection of 14 MB-UIDEs are selected, described in terms of the framework, compared and analysed from the information available in the literature. The framework can be used as an introduction to the MB-UIDE technology since it relates and provides a description for the terms used in MB-UIDE papers.
MIKE: the menu interaction kontrol environment A User Interface Management System (UIMS) called MIKE that does not use the syntactic specifications found in most UIMSs is described. Instead, MIKE provides a default syntax that is automatically generated from the definition of the semantic commands that the interaction is to support. The default syntax is refined using an interface editor that allows modification of the presentation of the interface. It is shown how active pictures can be created by adding action expressions to the viewports of MIKE's windowing system. The implications of MIKE's command-based dialogue description are discussed in terms of extensible interfaces, device and dialogue-style independence, and system support functions.
From OOA to GUIs: The JANUS System
History, Results, and Bibliography of the User Interface Design Environment (UIDE), an Early Model-based System for User Interface Design and Implementation
PLUG-IN: using Tcl/Tk for plan-based user guidance PLUG-IN (PLan-Based User Guidance for Intelligent Navigation) is a user guidance component supporting the user of interactive applications. It generates dynamical on-line help pages and animation sequences on the fly. On the dynamical help pages textual help for the user is displayed whereas the animation sequences are used to show how the user can interact with the application. In our presentation we demonstrate the given user support by looking at the user interface of an ISDN telephone application and discuss the underlying Tcl/Tk concepts of PLUG-IN.
Multi-threading and remote latency in software DSMs This paper evaluates the use of per-node multi-threading to hide remote memory and synchronization latencies in a software DSM. As with hardware systems, multi-threading in software systems can be used to reduce the costs of remote requests by switching threads when the current thread blocks. We added multi-threading to the CVM software DSM and evaluated its impact on performance for a suite of common shared memory programs. Multi-threading resulted in speed improvements of at least 17% in three of the seven applications in our suite, and lesser improvements in the other applications. However, we found that: good performance is not always achievable transparently for non-trivial applications; multi-threading can negatively interact with DSM operations; multi-threading decreases cache and TLB locality; and any multi-threading speedup is dependent on available work.
I/O reference behavior of production database workloads and the TPC benchmarks—an analysis at the logical level As improvements in processor performance continue to far outpace improvements in storage performance, I/O is increasingly the bottleneck in computer systems, especially in large database systems that manage huge amounts of data. The key to achieving good I/O performance is to thoroughly understand its characteristics. In this article we present a comprehensive analysis of the logical I/O reference behavior of the peak production database workloads from ten of the world's largest corporations. In particular, we focus on how these workloads respond to different techniques for caching, prefetching, and write buffering. Our findings include several broadly applicable rules of thumb that describe how effective the various I/O optimization techniques are for the production workloads. For instance, our results indicate that the buffer pool miss ratio tends to be related to the ratio of buffer pool size to data size by an inverse square root rule. A similar fourth root rule relates the write miss ratio and the ratio of buffer pool size to data size. In addition, we characterize the reference characteristics of workloads similar to the Transaction Processing Performance Council (TPC) benchmarks C (TPC-C) and D (TPC-D), which are de facto standard performance measures for online transaction processing (OLTP) systems and decision support systems (DSS), respectively. Since benchmarks such as TPC-C and TPC-D can only be used effectively if their strengths and limitations are understood, a major focus of our analysis is to identify aspects of the benchmarks that stress the system differently than the production workloads. We discover that for the most part, the reference behavior of TPC-C and TPC-D falls within the range of behavior exhibited by the production workloads. However, there are some noteworthy exceptions that affect well-known I/O optimization techniques such as caching (LRU is further from the optimal for TPC-C, while there is little sharing of pages between transactions for TPC-D), prefetching (TPC-C exhibits no significant sequentiality), and write buffering (write buffering is less effective for the TPC benchmarks). While the two TPC benchmarks generally complement one another in reflecting the characteristics of the production workloads, there remain aspects of the real workloads that are not represented by either of the benchmarks.
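The inverse square root rule of thumb can be written out directly; the functional form below is a literal reading of the stated rule, and the constant c is a workload-dependent assumption, not a value from the article.

```python
# miss_ratio ~ c * (buffer_pool_size / data_size) ** -0.5, capped at 1.
def estimated_miss_ratio(buffer_pool_size, data_size, c=0.05):
    return min(1.0, c * (buffer_pool_size / data_size) ** -0.5)

# Quadrupling the buffer pool roughly halves the miss ratio under this rule:
assert abs(estimated_miss_ratio(0.04, 1.0) / estimated_miss_ratio(0.16, 1.0) - 2.0) < 1e-9
```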
Informed multi-process prefetching and caching Informed prefetching and caching based on application disclosure of future I/O accesses (hints) can dramatically reduce the execution time of I/O-intensive applications. A recent study showed that, in the context of a single hinting application, prefetching and caching algorithms should adapt to the dynamic load on the disks to obtain the best performance. In this paper, we show how to incorporate adaptivity to disk load into the TIP2 system, which uses cost-benefit analysis to allocate global resources among multiple processes. We compare the resulting system, which we call TIPTOE (TIP with Temporal Overload Estimators) to Cao et al's LRU-SP allocation scheme, also modified to include adaptive prefetching. Using disk-accurate trace-driven simulation we show that, averaged over eleven experiments involving pairs of hinting applications, and with data striped over one to ten disks, TIPTOE delivers 7% lower execution time than LRU-SP. Where the computation and I/O demands of each experiment are closely matched, in a two-disk array, TIPTOE delivers 18% lower execution time.
Towards Self-Tuning Memory Management for Data Servers Although today's computers provide huge amounts of main memory, the ever-increasing load of large data servers, imposed by resource-intensive decision-support queries and accesses to multimedia and other complex data, often leads to memory contention and may result in severe performance degradation. Therefore, careful tuning of memory management is crucial for heavy-load data servers. This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system. The common, fundamental elements in these methods include on-line load tracking, near-future access prediction based on stochastic models and the available on-line statistics, and dynamic and automatic adjustment of control parameters in a feedback loop.
Systemic Nonlinear Planning This paper presents a simple, sound, complete, and systematic algorithm for domain independent STRIPS planning. Simplicity is achieved by starting with a ground procedure and then applying a general, and independently verifiable, lifting transformation. Previous planners have been designed directly as lifted procedures. Our ground procedure is a ground version of Tate's NONLIN procedure. In Tate's procedure one is not required to determine whether a prerequisite of a step in an unfinished plan is guaranteed to hold in all linearizations. This allows Tate's procedure to avoid the use of Chapman's modal truth criterion. Systematicity is the property that the same plan, or partial plan, is never examined more than once.
What regularized auto-encoders learn from the data-generating distribution. What do auto-encoders learn about the underlying data-generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data-generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input). It contradicts previous interpretations of reconstruction error as an energy function. Unlike previous results, the theorems provided here are completely generic and do not depend on the parameterization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood because it does not involve a partition function. Finally, we show how an approximate Metropolis-Hastings MCMC can be setup to recover samples from the estimated distribution, and this is confirmed in sampling experiments.
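The central identity can be stated compactly: for a reconstruction function $r(x)$ trained with small corruption noise $\sigma$, the residual estimates the score of the data-generating density,

```latex
% Reconstruction residual approximates the score, scaled by the noise variance.
\[
r(x) - x \;\approx\; \sigma^{2}\,\frac{\partial \log p(x)}{\partial x},
\]
```

so moving from $x$ toward $r(x)$ climbs the estimated density, which is what the proposed Metropolis-Hastings sampler exploits.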
Two-Layer Multiple Kernel Learning.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.025027
0.035248
0.025009
0.021225
0.016713
0.010591
0.000004
0.000001
0
0
0
0
0
0
Generating Project Networks Procedures for optimization and resource allocation in Operations Research first require a project network for the task to be specified. The specification of a project network is at present done in an intuitive way. AI work in plan formation has developed formalisms for specifying primitive activities, and recent work by Sacerdoti (1975a) has developed a planner able to generate a plan as a partially ordered network of actions. The "planning: a joint AI/OR approach" project at Edinburgh has extended such work and provided a hierarchic planner which can aid in the generation of project networks. This paper describes the planner (NONLIN) and the Task Formalism (TF) used to hierarchically specify a domain.
A review of machine learning for automated planning. Recent discoveries in automated planning are broadening the scope of planners, from toy problems to real applications. However, applying automated planners to real-world problems is far from simple. On the one hand, the definition of accurate action models for planning is still a bottleneck. On the other hand, off-the-shelf planners fail to scale up and to provide good solutions in many domains. In these problematic domains, planners can exploit domain-specific control knowledge to improve their performance in terms of both speed and quality of the solutions. However, manual definition of control knowledge is quite difficult. This paper reviews recent techniques in machine learning for the automatic definition of planning knowledge. It has been organized according to the target of the learning process: automatic definition of planning action models and automatic definition of planning control knowledge. In addition, the paper reviews the advances in the related field of reinforcement learning.
A New Algorithm for Generative Planning Existing generative planners have two properties that one would like to avoid if possible. First, they use a single mechanism to solve problems both of action selection and of action sequencing, thereby failing to exploit recent progress on scheduling and satisfiability algorithms. Second, the context in which a subgoal is solved is governed in part by the solutions to other subgoals, as opposed to plans for the subgoals being developed in isolation and then merged to yield a plan for the conjunction. We present a reformulation of the planning problem that appears to avoid these difficulties, describing an algorithm that solves subgoals in isolation and then appeals to a separate NP-complete scheduling test to determine whether the actions that have been selected can be combined in a useful way.
Synthesizing plans that contain actions with context-dependent effects
Domain-independent planning: representation and plan generation A domain-independent planning program that supports both automatic and interactive generation of hierarchical, partially ordered plans is described. An improved formalism makes extensive use of constraints and resources to represent domains and actions more powerfully. The formalism also offers efficient methods for representing properties of objects that do not change over time, allows specification of the plan rationale (which includes scoping of conditions and appropriately relating different levels in the hierarchy), and provides the ability to express deductive rules for deducing the effects of actions. The implications of allowing parallel actions in a plan or problem solution are discussed, and new techniques for efficiently detecting and remedying harmful parallel interactions are presented. The most important of these techniques, reasoning about resources, is emphasized and explained. The system supports concurrent exploration of different branches in the search, making best-first search easy to implement.
Systemic Nonlinear Planning This paper presents a simple, sound, complete, and systematic algorithm for domain independent STRIPS planning. Simplicity is achieved by starting with a ground procedure and then applying a general, and independently verifiable, lifting transformation. Previous planners have been designed directly as lifted procedures. Our ground procedure is a ground version of Tate's NONLIN procedure. In Tate's procedure one is not required to determine whether a prerequisite of a step in an unfinished plan is guaranteed to hold in all linearizations. This allows Tate's procedure to avoid the use of Chapman's modal truth criterion. Systematicity is the property that the same plan, or partial plan, is never examined more than once.
Temporal data base management Reasoning about time typically involves drawing conclusions on the basis of incomplete information. Uncertainty arises in the form of ignorance, indeterminacy, and indecision. Despite the lack of complete information, a problem solver is continually forced to make predictions in order to pursue hypotheses and plan for the future. Such predictions are frequently contravened by subsequent evidence. This paper presents a computational approach to temporal reasoning that directly confronts these issues. The approach centers around techniques for managing a data base of assertions corresponding to the occurrence of events and the persistence of their effects over time. The resulting computational framework performs the temporal analog of (static) reason maintenance by keeping track of dependency information involving assumptions about the truth of facts spanning various intervals of time. The system described in this paper extends classical predicate-calculus data bases, such as those used by PROLOG, to deal with time in an efficient and natural manner.
Action Languages Action languages are formal models of parts of the natural language that are used for talking about the effects of actions. This article is a collection of definitions related to action languages that may be useful as a reference in future publications. 1 Introduction This article is a collection of definitions related to action languages. It does not provide a comprehensive discussion of the subject, and does not contain a complete bibliography, but it may be useful as a reference in...
Compiling uncertainty away in conformant planning problems with bounded width Conformant planning is the problem of finding a sequence of actions for achieving a goal in the presence of uncertainty in the initial state or action effects. The problem has been approached as a path-finding problem in belief space where good belief representations and heuristics are critical for scaling up. In this work, a different formulation is introduced for conformant problems with deterministic actions where they are automatically converted into classical ones and solved by an off-the-shelf classical planner. The translation maps literals L and sets of assumptions t about the initial situation into new literals KL/t that represent that L must be true if t is initially true. We lay out a general translation scheme that is sound and establish the conditions under which the translation is also complete. We show that the complexity of the complete translation is exponential in a parameter of the problem called the conformant width, which for most benchmarks is bounded. The planner based on this translation exhibits good performance in comparison with existing planners, and is the basis for T0, the best performing planner in the Conformant Track of the 2006 International Planning Competition.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
The nature of statistical learning theory.
A Relationship between Difference Hierarchies and Relativized Polynomial Hierarchies. Chang and Kadin have shown that if the difference hierarchy over NP collapses to level $k$, then the polynomial hierarchy (PH) is equal to the $k$th level of the difference hierarchy over $\Sigma_{2}^{p}$. We simplify their proof and obtain a slightly stronger conclusion: If the difference hierarchy over NP collapses to level $k$, then $PH = \left(P_{(k-1)-tt}^{NP}\right)^{NP}$. We also extend the result to classes other than NP: For any class $C$ that has $\leq_{m}^{p}$-complete sets and is closed under $\leq_{conj}^{p}$- and $\leq_{m}^{NP}$-reductions, if the difference hierarchy over $C$ collapses to level $k$, then $PH^{C} = \left(P_{(k-1)-tt}^{NP}\right)^{C}$. Then we show that the exact counting class $C_{=}P$ is closed under $\leq_{disj}^{p}$- and $\leq_{m}^{co-NP}$-reductions. Consequently, if the difference hierarchy over $C_{=}P$ collapses to level $k$, then $PH^{PP}$ is equal to $\left(P_{(k-1)-tt}^{NP}\right)^{PP}$. In contrast, the difference hierarchy over the closely related class PP is known to collapse. Finally, we consider two ways of relativizing the bounded query class $P_{k-tt}^{NP}$: the restricted relativization $P_{k-tt}^{NP^{C}}$ and the full relativization $\left(P_{k-tt}^{NP}\right)^{C}$. If $C$ is NP-hard, then we show that the two relativizations are different unless $PH^{C}$ collapses.
Continuous Data Block Placement in and Elevation from Tertiary Storage in Hierarchical Storage Servers Given the cost of memories and the very large storage and bandwidth requirements of large-scale multimedia databases, hierarchical storage servers (which consist of disk-based secondary storage and tape-library-based tertiary storage) are becoming increasingly popular. Such server applications rely upon tape libraries to store all media, exploiting their excellent storage capacity and cost per MB characteristics. They also rely upon disk arrays, exploiting their high bandwidth, to satisfy a very large number of requests. Given typical access patterns and server configurations, the tape drives are fully utilized uploading data for requests that “fall through” to the tertiary level. Such upload operations consume significant secondary storage device and bus bandwidth. In addition, with present technology (and trends) the disk array can serve fewer requests to continuous objects than it can store, mainly due to IO and/or backplane bus bandwidth limitations. In this work we address comprehensively the performance of these hierarchical, continuous-media storage servers by looking at all three main system resources: the tape drive bandwidth, the secondary-storage bandwidth, and the host's RAM. We provide techniques which, while fully utilizing the tape drive bandwidth (an expensive resource), introduce bandwidth savings that allow the secondary storage devices to serve more requests, without increasing demands on the host's RAM space. Specifically, we consider the issue of elevating continuous data from its permanent place in tertiary storage for display purposes. We develop algorithms for sharing the responsibility for the playback between the secondary and tertiary devices and for placing the blocks of continuous objects on tapes, and show how they achieve the above goals. We study these issues for different commercial tape library products with different bandwidth and tape capacity characteristics, in environments with and without the multiplexing of tape libraries.
Unsupervised (Parameter) Learning For MRFs On Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach: a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double loop algorithm with PCD learning experimentally and show that the former converges faster and more stably than the latter.
1.007943
0.008
0.007472
0.005841
0.004426
0.003009
0.001764
0.000429
0.000057
0.000016
0.000001
0
0
0
Representing Policies for Quantified Boolean Formulae The practical use of Quantified Boolean Formulae (QBFs) often calls for more than solving the validity problem QBF. For this reason we investigate the corresponding function problems whose expected outputs are policies. QBFs which do not evaluate to true do not have any solution policy, but can be of interest nevertheless; for handling them, we introduce a notion of partial policy. We focus on the representation of policies, considering QBFs of the form $\forall X\,\exists Y\,\phi$. Because the explicit representation of policies for such QBFs can be of exponential size, descriptions as compact as possible must be looked for. To address this issue, two approaches based on the decomposition and the compilation of $\phi$ are presented.
Propositional fragments for knowledge compilation and quantified boolean formulae Several propositional fragments have been considered so far as target languages for knowledge compilation and used for improving computational tasks from major AI areas (like inference, diagnosis and planning); among them are the (quite influential) ordered binary decision diagrams, prime implicates, prime implicants, "formulae" in decomposable negation normal form. On the other hand, the validity problem QBF for Quantified Boolean Formulae (QBF) has been acknowledged for the past few years as an important issue for AI, and many solvers have been designed for this purpose. In this paper, the complexity of restrictions of QBF obtained by imposing the matrix of the input QBF to belong to such propositional fragments is identified. Both tractability and intractability results (PSPACE-completeness) are obtained.
A first step towards a unified proof checker for QBF Compared to SAT, there is no simple concept of what a solution to a QBF problem is. Furthermore, as the series of QBF evaluations shows, the QBF solvers that are available often disagree. Thus, proof generation for QBF seems to be even more important than for SAT. In this paper we propose a new uniform proof format, which captures refutations and witnesses for a variety of QBF solvers, and is based on a novel extended resolution rule for QBF. Our experiments show the flexibility of this new format. We also identify shortcomings of our format and conjecture that a purely resolution based proof calculus is not powerful enough to trace the most efficient solvers.
Extracting certificates from quantified boolean formulas A certificate of satisfiability for a quantified boolean formula is a compact representation of one of its models which is used to provide solver-independent evidence of satisfiability. In addition, it can be inspected to gather explicit information about the semantics of the formula. Due to the intrinsic nature of quantified formulas, such certificates demand much care to be efficiently extracted, compactly represented, and easily queried. We show how to solve all these problems.
Symbolic Decision Procedures for QBF Much recent work has gone into adapting techniques that were originally developed for SAT solving to QBF solving. In particular, QBF solvers are often based on SAT solvers. Most competitive QBF solvers are search-based. In this work we explore an alternative approach to QBF solving, based on symbolic quantifier elimination. We extend some recent symbolic approaches for SAT solving to symbolic QBF solving, using various decision-diagram formalisms such as OBDDs and ZDDs. In both approaches, QBF formulas are solved by eliminating all their quantifiers. Our first solver, QMRES, maintains a set of clauses represented by a ZDD and eliminates quantifiers via multi-resolution. Our second solver, QBDD, maintains a set of OBDDs and eliminates quantifiers by applying them to the underlying OBDDs. We compare our symbolic solvers to several competitive search-based solvers. We show that QBDD is not competitive, but QMRES compares favorably with search-based solvers on various benchmarks consisting of non-random formulas.
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
Consensus and Cooperation in Networked Multi-Agent Systems This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analys...
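A minimal discrete-time consensus iteration over a fixed undirected graph illustrates the basic convergence behavior surveyed here; the graph, step size, and initial states are illustrative.

```python
# Each agent repeatedly moves toward the average of its neighbors
# (discrete-time Laplacian dynamics on a 4-cycle).
import numpy as np

adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 5.0, 3.0, 7.0])   # initial agent states
eps = 0.25                           # step size below 1 / max degree
for _ in range(100):
    x = x + eps * (adj @ x - adj.sum(axis=1) * x)
assert np.allclose(x, 4.0)           # agreement on the average of the initial states
```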
Reasoning about action I: a possible worlds approach Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to the related problem that it is not always reasonable to explicitly specify all of the consequences of actions. The approach involves keeping a single model of the world that is updated when actions...
Compilability of Domain Descriptions in the Language A
Multi-threading and remote latency in software DSMs This paper evaluates the use of per-node multi-threading to hide remote memory and synchronization latencies in a software DSM. As with hardware systems, multi-threading in software systems can be used to reduce the costs of remote requests by switching threads when the current thread blocks. We added multi-threading to the CVM software DSM and evaluated its impact on performance for a suite of common shared memory programs. Multi-threading resulted in speed improvements of at least 17% in three of the seven applications in our suite, and lesser improvements in the other applications. However, we found that: good performance is not always achievable transparently for non-trivial applications; multi-threading can negatively interact with DSM operations; multi-threading decreases cache and TLB locality; and any multi-threading speedup is dependent on available work.
The closure of monadic NP It is a well-known result of Fagin that the complexity class NP coincides with the class of problems expressible in existential second-order logic (Σ¹₁), which allows sentences consisting of a string of existential second-order quantifiers followed by a first-order formula. Monadic NP is the class of problems expressible in monadic Σ¹₁, i.e., Σ¹₁ with the restriction that the second-order quantifiers are all unary and hence range only over sets (as opposed to ranging over, say, binary relations). For example, the property of a graph being 3-colorable belongs to monadic NP, because 3-colorability can be expressed by saying that there exist three sets of vertices such that each vertex is in exactly one of the sets and no two vertices in the same set are connected by an edge. Unfortunately, monadic NP is not a robust class, in that it is not closed under first-order quantification. We define closed monadic NP to be the closure of monadic NP under first-order quantification and existential unary second-order quantification. Thus, closed monadic NP differs from monadic NP in that we allow the possibility of arbitrary interleavings of first-order quantifiers among the existential unary second-order quantifiers. We show that closed monadic NP is a natural, rich, and robust subclass of NP. As evidence for its richness, we show that not only is it a proper extension of monadic NP, but that it contains properties not in various other extensions of monadic NP. In particular, we show that closed monadic NP contains an undirected graph property not in the closure of monadic NP under first-order quantification and Boolean operations. Our lower-bound proofs require a number of new game-theoretic techniques.
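For concreteness, the 3-colorability example above can be written as a monadic Σ¹₁ sentence; one standard rendering (relation names are ours):

```latex
\exists R\,\exists G\,\exists B\;\forall x\,\forall y\,\Big[
  \big(R(x)\vee G(x)\vee B(x)\big)
  \wedge \neg\big(R(x)\wedge G(x)\big)
  \wedge \neg\big(R(x)\wedge B(x)\big)
  \wedge \neg\big(G(x)\wedge B(x)\big)
  \wedge \Big(E(x,y)\rightarrow
    \neg\big((R(x)\wedge R(y))\vee(G(x)\wedge G(y))\vee(B(x)\wedge B(y))\big)\Big)\Big]
```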
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.1
0.05
0.025
0.014286
0
0
0
0
0
0
0
0
0
The acceptance and use of computer based assessment The effective development of a computer based assessment (CBA) depends on students' acceptance. The purpose of this study is to build a model that demonstrates the constructs that affect students' behavioral intention to use a CBA. The proposed model, Computer Based Assessment Acceptance Model (CBAAM) is based on previous models of technology acceptance such as Technology Acceptance Model (TAM), Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Usage of Technology (UTAUT). Constructs from previous models were used such as Perceived Usefulness, Perceived Ease of Use, Computer Self Efficacy, Social Influence, Facilitating Conditions and Perceived Playfulness. Additionally, two new variables, Content and Goal Expectancy, were added to the proposed research model. Data were collected from 173 participants in an introductory informatics course using a survey questionnaire. Partial Least Squares (PLS) was used to test the measurement and the structural model. Results indicate that Perceived Ease of Use and Perceived Playfulness have a direct effect on CBA use. Perceived Usefulness, Computer Self Efficacy, Social Influence, Facilitating Conditions, Content and Goal Expectancy have only indirect effects. These eight variables explain approximately 50% of the variance of Behavioural Intention.
Measuring self-regulation in online and blended learning environments In developing the Online Self-regulated Learning Questionnaire (OSLQ) to address the need for an instrument measuring self-regulation in the online learning environment, this study provides evidence toward the reliability and validity of the instrument. Data were collected from two samples of students. The first sample of students took coursework using an online course format while a second sample of students took coursework delivered via a blended or hybrid course format. Cronbach alpha (α) and confirmatory factor analyses were performed to assess the psychometric properties of the OSLQ across both samples of students. Results indicate the OSLQ is an acceptable measure of self-regulation in the online and blended learning environments.
The influence of achievement goals and perceptions of online help on its actual use in an interactive learning environment This study explored the influence of achievement goals and perceptions of help-seeking on a learner's actual use of help in an interactive learning environment. After being shown a web site on statistics, 49 psychology students answered a questionnaire on achievement goals and their perceptions of help-seeking. They were then asked to solve statistics problems in an interactive learning environment. In this environment, they were allowed to use instrumental and executive help. The results showed that high mastery goals were related to a high perception of a threat to the learner's autonomy but not to the use of help. Performance goals were positively related to a perception of threat of not being considered competent and negatively related to the use of help and especially instrumental help. The implications of these results for future research on the help-seeking process in an interactive learning environment are discussed particularly in relation to the Technology Acceptance Model of the use of help.
Mining theory-based patterns from Big data: Identifying self-regulated learning strategies in Massive Open Online Courses. Big data in education offers unprecedented opportunities to support learners and advance research in the learning sciences. Analysis of observed behaviour using computational methods can uncover patterns that reflect theoretically established processes, such as those involved in self-regulated learning (SRL). This research addresses the question of how to integrate this bottom-up approach of mining behavioural patterns with the traditional top-down approach of using validated self-reporting instruments. Using process mining, we extracted interaction sequences from fine-grained behavioural traces for 3458 learners across three Massive Open Online Courses. We identified six distinct interaction sequence patterns. We matched each interaction sequence pattern with one or more theory-based SRL strategies and identified three clusters of learners. First, Comprehensive Learners, who follow the sequential structure of the course materials, which sets them up for gaining a deeper understanding of the content. Second, Targeting Learners, who strategically engage with specific course content that will help them pass the assessments. Third, Sampling Learners, who exhibit more erratic and less goal-oriented behaviour, report lower SRL, and underperform relative to both Comprehensive and Targeting Learners. Challenges that arise in the process of extracting theory-based patterns from observed behaviour are discussed, including analytic issues and limitations of available trace data from learning platforms.
Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses. Individuals with strong self-regulated learning (SRL) skills, characterized by the ability to plan, manage and control their learning process, can learn faster and outperform those with weaker SRL skills. SRL is critical in learning environments that provide low levels of support and guidance, as is commonly the case in Massive Open Online Courses (MOOCs). Learners can be trained to engage in SRL and actively supported with prompts and activities. However, effective implementation of learner support systems in MOOCs requires an understanding of which SRL strategies are most effective and how these strategies manifest in online behavior. Moreover, identifying learner characteristics that are predictive of weaker SRL skills can advance efforts to provide targeted support without obtrusive survey instruments. We investigated SRL in a sample of 4,831 learners across six MOOCs based on individual records of overall course achievement, interactions with course content, and survey responses. We found that goal setting and strategic planning predicted attainment of personal course goals, while help seeking was associated with lower goal attainment. Learners with stronger SRL skills were more likely to revisit previously studied course materials, especially course assessments. Several learner characteristics, including demographics and motivation, predicted learners' SRL skills. We discuss implications for theory and the development of learning environments that provide adaptive support. Goal setting and strategic planning positively predict goal attainment in MOOCs. Help seeking negatively predicts goal attainment, e.g., earning a certificate. Self-reported SRL strategies manifest behaviorally in revisiting course content. Learner characteristics (demographics, motivation, etc.) predict self-reported SRL.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
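A minimal sketch of the "simple preprocessor" mentioned above: each classically negated atom is renamed to a fresh atom, and a consistency constraint is added. The encoding (rules without negation-as-failure literals, a dummy false head for constraints) is an illustrative assumption:

```python
# A literal is an atom name, optionally prefixed with "-" for classical
# negation; a rule is (head, body). "-p" becomes the fresh atom "p_neg",
# and a constraint forbids p and p_neg from holding together.
def rename(literal):
    return literal[1:] + "_neg" if literal.startswith("-") else literal

def eliminate_classical_negation(rules):
    out = [(rename(h), [rename(b) for b in body]) for h, body in rules]
    atoms = {l.lstrip("-") for h, body in rules for l in [h, *body]}
    # ":- p, p_neg" written with a dummy false head, as in many encodings.
    out += [("_false", [a, a + "_neg"]) for a in sorted(atoms)]
    return out

# Hypothetical example:  -q :- p.   r :- -q.
print(eliminate_classical_negation([("-q", ["p"]), ("r", ["-q"])]))
```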
Evaluating collaborative filtering recommender systems Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated.
Practical Issues in Temporal Difference Learning This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
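The weight update underlying this case study is Sutton's TD(λ) rule; with learning rate α and network prediction P_t at time t, it reads:

```latex
w_{t+1} \;=\; w_t \;+\; \alpha\,\big(P_{t+1}-P_t\big)\sum_{k=1}^{t}\lambda^{\,t-k}\,\nabla_{w} P_k
```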
The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5] where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the “counting polynomial-time hierarchy” which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved.
Recognizing frozen variables in constraint satisfaction problems In constraint satisfaction problems over finite domains, some variables can be frozen, that is, they take the same value in all possible solutions. We study the complexity of the problem of recognizing frozen variables with restricted sets of constraint relations allowed in the instances. We show that the complexity of such problems is determined by certain algebraic properties of these relations. Under the assumption that NP ≠ coNP (and consequently PTIME ≠ NP), we characterize all tractable problems, and describe large classes of NP-complete, coNP-complete, and DP-complete problems. As an application of these results, we completely classify the complexity of the problem in two cases: (1) with domain size 2; and (2) when all unary relations are present. We also give a rough classification for domain size 3.
Scheduling a mixed interactive and batch workload on a parallel, shared memory supercomputer
Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription. We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription.
Wsben: A Web Services Discovery And Composition Benchmark Toolkit In this article, a novel benchmark toolkit, WSBen, for testing web services discovery and composition algorithms is presented. The WSBen includes: (1) a collection of synthetically generated web services files in WSDL format with diverse data and model characteristics; (2) queries for testing discovery and composition algorithms; (3) auxiliary files to do statistical analysis on the WSDL test sets; (4) converted WSDL test sets that conventional AI planners can read; and (5) a graphical interface to control all these behaviors. Users can fine-tune the generated WSDL test files by varying underlying network models. In addition, to illustrate the application of WSBen, we present case studies from three domains: (1) web service composition; (2) AI planning; and (3) the laws of networks in the Physics community. It is our hope that WSBen will provide useful insights in evaluating the performance of web services discovery and composition algorithms. The WSBen toolkit is available at: http://pike.psu.edu/sw/wsben/.
Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification.
1.22
0.22
0.22
0.22
0.073333
0
0
0
0
0
0
0
0
0
Minimizing interference through application mapping in multi-level buffer caches In this paper, we study the impact of cache sharing on co-mapped applications in multi-level buffer cache hierarchies. When the number of applications exceeds the number of resources, resource sharing is inevitable. However, unless applications are co-mapped carefully, destructive interference may cause applications to thrash and spend most of their time paging data to and from disks. We propose two novel models which predict the performance of an application in the presence of other applications and an algorithm which uses the output of these models to perform application-to-node mapping in a multi-level buffer cache hierarchy. Our models use the reuse distances of the application reference streams and their respective I/O rates. This information can be obtained either online or offline. Our main advantage is that we do not require profile information of all application pairs to predict their interferences. The goal of this mapping is to minimize destructive interference during execution. We validate the effectiveness of our models and mapping scheme using several I/O-intensive applications, and found that the error in prediction of our two models is only 3.9% and 2.7% respectively, on average. Further, using our approach, we were effectively able to co-map applications to maximize the performance of the buffer cache hierarchy by 43.6% and 56.8% on average over the median and worst mappings respectively in the entire I/O stack.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
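In Reiter's notation, a default is an inference rule of the form below, read "if α is believed and each β_i is consistent with what is believed, conclude γ"; the "very large class of commonly occurring defaults" is commonly taken to be the normal case, where the conclusion equals the justification:

```latex
\frac{\alpha(\bar{x}) \;:\; \beta_1(\bar{x}),\dots,\beta_m(\bar{x})}{\gamma(\bar{x})}
\qquad\text{normal defaults:}\qquad
\frac{\alpha(\bar{x}) \;:\; \beta(\bar{x})}{\beta(\bar{x})}
```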
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
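A naive version of the extended Davis-Putnam evaluation discussed here, splitting on the outermost quantifier and omitting the paper's pruning techniques; the prenex encoding (signed integer literals) is an illustrative assumption:

```python
def eval_qbf(prefix, clauses, assignment=None):
    """prefix: list of ('e'|'a', var); clauses: lists of literals, where a
    literal is +v or -v. Splits on the outermost prefix variable."""
    assignment = assignment or {}
    # Simplify the clause set under the current assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return False                  # clause falsified
        simplified.append(rest)
    if not prefix:
        return not simplified
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], simplified, {**assignment, var: val})
                for val in (False, True))
    return any(branches) if quant == 'e' else all(branches)

# forall x exists y. (x or y) and (not x or not y)  -- true: take y = not x.
print(eval_qbf([('a', 1), ('e', 2)], [[1, 2], [-1, -2]]))  # True
```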
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
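A minimal numpy sketch of the kernel eigenvalue computation: eigendecompose the centered Gram matrix instead of a covariance matrix in feature space. The RBF kernel and data sizes are illustrative choices, not the paper's polynomial-kernel experiments:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project training points onto the top kernel principal components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                           # projected training points

X = np.random.default_rng(0).normal(size=(50, 3))
print(kernel_pca(X).shape)  # (50, 2)
```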
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
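In the linear case, the square-root idea reduces to QR-factoring the (whitened) measurement Jacobian and back-substituting on the triangular factor; a toy numpy sketch where the random problem sizes are illustrative, not a full SLAM pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy linear least squares: A plays the role of the measurement Jacobian,
# b the residual vector; we seek x minimizing ||A x - b||^2.
A = rng.normal(size=(200, 30))
b = rng.normal(size=200)

Q, R = np.linalg.qr(A)           # R is the square-root information matrix
x = np.linalg.solve(R, Q.T @ b)  # solve the triangular system R x = Q^T b

# Agrees with the normal-equations solution of (A^T A) x = A^T b.
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```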
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Symbolic computational techniques for solving games Games are useful in modular specification and analysis of systems where the distinction among the choices controlled by different components (for instance, the system and its environment) is made explicit. In this paper, we formulate and compare various symbolic computational techniques for deciding the existence of winning strategies. The game structure is given implicitly, and the winning condition is either a reachability game of the form "p until q" (for state predicates p and q) or a safety game of the form "Always p." For reachability games, the first technique employs symbolic fixed-point computation using ordered binary decision diagrams (BDDs) (9). The second technique checks for the existence of strategies that ensure winning within k steps, for a user-specified bound k, by reduction to the satisfiability of quantified boolean formulas. Finally, the bounded case can also be solved by reduction to satisfiability of ordinary boolean formulas, and we discuss two techniques, one based on encoding the strategy tree and one based on encoding a witness subgraph, for reduction to Sat. We also show how some of these techniques can be adopted to solve safety games. We compare the various approaches by evaluating them on two examples for reachability games, and on an interface synthesis example for a fragment of TinyOS (15) for safety games. We use existing tools such as Mocha (4), Mucke (7), Semprop (19), Qube (12), and Berkmin (13) and contrast the results.
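For intuition, an explicit-state (non-symbolic) version of the fixed-point computation for reachability games; the graph encoding is an illustrative assumption, and a BDD-based solver would apply the same controllable-predecessor operator to symbolic state sets:

```python
def attractor(nodes, edges, player_nodes, target):
    """A node is winning if the player owns it and some successor wins,
    or the opponent owns it and all its successors win; iterate to a
    fixed point starting from the target set."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in win:
                continue
            succs = edges[v]
            if (v in player_nodes and any(s in win for s in succs)) or \
               (v not in player_nodes and succs and all(s in win for s in succs)):
                win.add(v)
                changed = True
    return win

# Tiny example: the player owns node 0 and can force reaching node 3.
edges = {0: [1, 2], 1: [3], 2: [0], 3: []}
print(attractor({0, 1, 2, 3}, edges, player_nodes={0}, target={3}))
```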
Analysis of search based algorithms for satisfiability of propositional and quantified boolean formulas arising from circuit state space diameter problems The sequential circuit state space diameter problem is an important problem in sequential verification. Bounded model checking is complete if the state space diameter of the system is known. By unrolling the transition relation, the sequential circuit state space diameter problem can be formulated as either a series of Boolean satisfiability (SAT) problems or an evaluation for satisfiability of a Quantified Boolean Formula (QBF). Thus far neither the SAT based technique that uses sophisticated SAT solvers, nor QBF evaluations for the various QBF formulations for this problem, have fared well in practice. The poor performance of the QBF evaluations is blamed on the relative immaturity of QBF solvers, with hope that ongoing research in QBF solvers could lead to practical success here. Most existing QBF algorithms, such as those based on the DPLL SAT algorithm, are search based. We show that using search based QBF algorithms to calculate the state space diameter of sequential circuits with existing problem formulations is no better than using SAT to solve this problem. This result holds independent of the representation of the QBF formula. This result is important as it highlights the need to explore non-search based QBF algorithms, or hybrids of search based and non-search based ones, for the sequential circuit state space diameter problem.
Learning for quantified boolean logic satisfiability Learning, i.e., the ability to record and exploit some information which is unveiled during the search, proved to be a very effective AI technique for problem solving and, in particular, for constraint satisfaction. We introduce learning as a general purpose technique to improve the performances of decision procedures for Quantified Boolean Formulas (QBFs). Since many of the recently proposed decision procedures for QBFs solve the formula using search methods, the addition of learning to such procedures has the potential of reducing useless explorations of the search space. To show the applicability of learning for QBF satisfiability we have implemented it in QUBE, a state-of-the-art QBF solver. While the backjumping engine embedded in QUBE provides a good starting point for our task, the addition of learning required us to devise new data structures and led to the definition and implementation of new pruning strategies. We report some experimental results that witness the effectiveness of learning. Notably, QUBE augmented with learning is able to solve instances that were previously out of its reach. To the best of our knowledge, this is the first time that learning is proposed, implemented and tested for QBF satisfiability.
Towards a Symmetric Treatment of Satisfaction and Conflicts in Quantified Boolean Formula Evaluation In this paper, we describe a new framework for evaluating Quantified Boolean Formulas (QBF). The new framework is based on the Davis-Putnam (DPLL) search algorithm. In existing DPLL based QBF algorithms, the problem database is represented in Conjunctive Normal Form (CNF) as a set of clauses, implications are generated from these clauses, and backtracking in the search tree is chronological. In this work, we augment the basic DPLL algorithm with conflict driven learning as well as satisfiability directed implication and learning. In addition to the traditional clause database, we add a cube database to the data structure. We show that cubes can be used to generate satisfiability directed implications similar to conflict directed implications generated by the clauses. We show that in a QBF setting, conflicting leaves and satisfying leaves of the search tree both provide valuable information to the solver in a symmetric way. We have implemented our algorithm in the new QBF solver Quaffle. Experimental results show that for some test cases, satisfiability directed implication and learning significantly prunes the search.
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
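The procedure introduced here survives as the DPLL core of modern SAT solvers; a compact sketch with unit propagation and a naive branching choice (the clause encoding as signed integers is illustrative):

```python
def dpll(clauses):
    """Clauses are lists of nonzero ints (+v / -v); True iff satisfiable."""
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:
                continue                        # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return False                    # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return True
    var = abs(clauses[0][0])                    # naive branching choice
    return dpll(clauses + [[var]]) or dpll(clauses + [[-var]])

print(dpll([[1, 2], [-1, 2]]))                  # True
print(dpll([[1, 2], [-1, 2], [-2, 3], [-3]]))   # False
```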
QBF modeling: exploiting player symmetry for simplicity and efficiency Quantified Boolean Formulas (QBFs) present the next big challenge for automated propositional reasoning. Not surprisingly, most of the present day QBF solvers are extensions of successful propositional satisfiability algorithms (SAT solvers). They directly integrate the lessons learned from SAT research, thus avoiding re-inventing the wheel. In particular, they use the standard conjunctive normal form (CNF) augmented with layers of variable quantification for modeling tasks as QBF. We argue that while CNF is well suited to “existential reasoning” as demonstrated by the success of modern SAT solvers, it is far from ideal for “universal reasoning” needed by QBF. The CNF restriction imposes an inherent asymmetry in QBF and artificially creates issues that have led to complex solutions, which, in retrospect, were unnecessary and sub-optimal. We take a step back and propose a new approach to QBF modeling based on a game-theoretic view of problems and on a dual CNF-DNF (disjunctive normal form) representation that treats the existential and universal parts of a problem symmetrically. It has several advantages: (1) it is generic, compact, and simpler, (2) unlike fully non-clausal encodings, it preserves the benefits of pure CNF and leverages the support for DNF already present in many QBF solvers, (3) it doesn't use the so-called indicator variables for conversion into CNF, thus circumventing the associated illegal search space issue, and (4) our QBF solver based on the dual encoding (Duaffle) consistently outperforms the best solvers by two orders of magnitude on a hard class of benchmarks, even without using standard learning techniques.
Quantifier structure in search based procedures for QBFs The best currently available solvers for Quantified Boolean Formulas (QBFs) process their input in prenex form, i.e., all the quantifiers have to appear in the prefix of the formula separated from the purely propositional part representing the matrix. However, in many QBFs deriving from applications, the propositional part is intertwined with the quantifier structure. To tackle this problem, the standard approach is to convert such QBFs in prenex form, thereby losing structural information about the prefix. In the case of search based solvers, the prenex form conversion introduces additional constraints on the branching heuristic, and reduces the benefits of the learning mechanisms. In this paper we show that conversion to prenex form is not necessary: current search based solvers can be naturally extended in order to handle non prenex QBFs and to exploit the original quantifier structure. We highlight the two mentioned drawbacks of the conversion in prenex form with a simple example, and we show that our ideas can be useful also for solving QBFs in prenex form. To validate our claims, we implemented our ideas in the state-of-the-art search based solver QUBE, and conducted an extensive experimental analysis. The results show that very substantial speedups can be obtained.
On Computing Solutions to Belief Change Scenarios Belief change scenarios were recently introduced as a framework for expressing different forms of belief change. In this paper, we show how belief revision and belief contraction (within belief change scenarios) can be axiomatised by means of quantified Boolean formulas. This approach has several benefits. First, it furnishes an axiomatic specification of belief change within belief change scenarios. Second, this axiomatisation allows us to identify upper bounds for the complexity of revision and contraction within belief change scenarios. We strengthen these upper bounds by providing strict complexity results for the considered reasoning tasks. Finally, we obtain an implementation of different forms of belief change by appeal to the existing system QUIP.
Probabilistic Planning with Information Gathering and Contingent Execution Most AI representations and algorithms for plan generation have not included the concept of information-producing actions (also called diagnostics, or tests, in the decision making literature). We present a planning representation and algorithm that models information-producing actions and constructs plans that exploit the information produced by those actions. We extend the buridan (Kushmerick et al. 1994) probabilistic planning algorithm, adapting the action representation to model the...
Concurrent actions in the situation calculus We propose a representation of concurrent actions; rather than invent a new formalism, we model them within the standard situation calculus by introducing the notions of global actions and primitive actions, whose relationship is analogous to that between situations and fluents. The result is a framework in which situations and actions play quite symmetric roles. The rich structure of actions gives rise to a new problem, which, due to this symmetry between actions and situations, is analogous to the traditional frame problem. In [Lin and Shoham 1991] we provided a solution to the frame problem based on a formal adequacy criterion called "epistemological completeness." Here we show how to solve the new problem based on the same adequacy criterion.
Fine-Grained Mobility in the Emerald System (Extended Abstract)
Efficient Approximations of Conjunctive Queries. When finding exact answers to a query over a large database is infeasible, it is natural to approximate the query by a more efficient one that comes from a class with good bounds on the complexity of query evaluation. In this paper we study such approximations for conjunctive queries. These queries are of special importance in databases, and we have a very good understanding of the classes that admit fast query evaluation, such as acyclic, or bounded (hyper)treewidth queries. We define approximations of a given query Q as queries from one of those classes that disagree with Q as little as possible. We mostly concentrate on approximations that are guaranteed to return correct answers. We prove that for the above classes of tractable conjunctive queries, approximations always exist, and are at most polynomial in the size of the original query. This follows from general results we establish that relate closure properties of classes of conjunctive queries to the existence of approximations. We also show that in many cases, the size of approximations is bounded by the size of the query they approximate. We establish a number of results showing how combinatorial properties of queries affect properties of their approximations, study bounds on the number of approximations, as well as the complexity of finding and identifying approximations. We also look at approximations that return all correct answers and study their properties.
Efficient Placement of Parity and Data to Tolerate Two Disk Failures in Disk Array Systems In this paper, we deal with the data/parity placement problem which is described as follows: how to place data and parity evenly across disks in order to tolerate two disk failures, given the number of disks N and the redundancy rate p which represents the amount of disk spaces to store parity information. To begin with, we transform the data/parity placement problem into the problem of constructing an N×N matrix such that the matrix will correspond to a solution to the problem. The method to construct a matrix has been proposed and we have shown how our method works through several illustrative examples. It is also shown that any matrix constructed by our proposed method can be mapped into a solution to the placement problem if a certain condition holds between N and p, where N is the number of disks and p is a redundancy rate.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.241324
0.061639
0.010836
0.008473
0.003876
0.001537
0.000717
0.000052
0.000006
0
0
0
0
0
Deep Learning for Anomaly Detection: A Survey. Anomaly detection is an important problem that has been well-studied within diverse research areas and application domains. The aim of this survey is two-fold: firstly, we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection. Furthermore, we review the adoption of these methods for anomaly detection across various application domains and assess their effectiveness. We have grouped state-of-the-art research techniques into different categories based on the underlying assumptions and approach adopted. Within each category we outline the basic anomaly detection technique, along with its variants, and present key assumptions used to differentiate between normal and anomalous behavior. For each category, we also present the advantages and limitations and discuss the computational complexity of the techniques in real application domains. Finally, we outline open issues in research and challenges faced while adopting these techniques.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On logical descriptions of some concepts in structural complexity theory A logical framework is introduced which captures the behaviour of oracle machines and gives logical descriptions of complexity classes that are defined by oracle machines. Using this technique, the notion of first-order self-reducibility is investigated and applied to obtain a structural result about non-uniform complexity classes below P.
Logical Characterization of Bounded Query Classes II: Polynomial-Time Oracle Machines We have shown that the logics (±HP)*[FOS] and (±HP)1[FOS] are of the same expressibility, and both capture P∥NP. This result gives us the weakest possible hint that it might be wiser to try and show that (±STC)*[FOS] collapses to (±STC)1[FOS] as opposed to trying to show that STC1[FOS] is closed under complementation: an attempt to use the methods of [Imm88] to achieve this latter result has failed (see [BCD89]).
The Boolean hierarchy I: structural properties
Extended stable semantics for normal and disjunctive programs
The nature of statistical learning theory
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem.
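The planar Frenet-Serret model used here reduces, for a single unit-speed vehicle, to the kinematics x' = cos(theta), y' = sin(theta), theta' = u, where u is the curvature (steering) control. A generic Euler-integration sketch in Python; the control input below is a made-up constant, not the paper's two-vehicle formation law.

import numpy as np

def simulate_vehicle(curvature, T=10.0, dt=0.01):
    # Integrate the planar Frenet-Serret equations for one unit-speed
    # vehicle under a time-varying curvature control u(t).
    n = int(T / dt)
    x = y = theta = 0.0
    traj = np.empty((n, 2))
    for k in range(n):
        u = curvature(k * dt)
        x += np.cos(theta) * dt
        y += np.sin(theta) * dt
        theta += u * dt
        traj[k] = (x, y)
    return traj

circle = simulate_vehicle(lambda t: 0.5)   # constant steering traces a circle
print(circle[-1])                          # ... of radius 1/0.5 = 2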
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of its power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
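To give a flavor of the stochastic search component, here is a generic WalkSAT-style local-search sketch for CNF formulas. The paper combines propositional encodings of planning problems with a solver from this family; this toy is a standard textbook variant, not the exact algorithm used there.

import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000):
    # Clauses are lists of nonzero ints: literal v means variable |v| is
    # True, -v means it is False. Returns a satisfying assignment or None.
    assign = [False] + [random.random() < 0.5 for _ in range(n_vars)]
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]
        clause = random.choice(unsat)
        if random.random() < p:
            v = abs(random.choice(clause))     # random-walk move
        else:                                  # greedy move: flip the variable
            def after_flip(var):               # leaving fewest clauses unsat
                assign[var] = not assign[var]
                n = sum(not any(sat(l) for l in c) for c in clauses)
                assign[var] = not assign[var]
                return n
            v = min((abs(l) for l in clause), key=after_flip)
        assign[v] = not assign[v]
    return None

# (x1 or x2) and (not x1 or x2): any model must set x2 = True
print(walksat([[1, 2], [-1, 2]], 2))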
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
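The two-dimensional layout being extended is easy to sketch: XOR parities over the rows and columns of an n-by-n grid of data blocks, so any single lost block can be rebuilt from its row (or column). A toy Python illustration with made-up integer block contents; the paper's extra n parity elements would simply mirror half of these parities.

from functools import reduce
from operator import xor

def two_d_parities(grid):
    # Row and column XOR parities for an n-by-n grid of data blocks.
    rows = [reduce(xor, row) for row in grid]
    cols = [reduce(xor, col) for col in zip(*grid)]
    return rows, cols

data = [[3, 7, 1], [4, 2, 9], [5, 8, 6]]    # n = 3, so n^2 = 9 data blocks
row_par, col_par = two_d_parities(data)

lost = data[1][2]                            # lose one block ...
rebuilt = row_par[1] ^ data[1][0] ^ data[1][1]
assert rebuilt == lost                       # ... and rebuild it from row parity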
1.2
0.1
0.001399
0
0
0
0
0
0
0
0
0
0
0
V4 Neural Network Model for Shape-Based Feature Extraction and Object Discrimination Visual area V4 plays an important role in the neural mechanism of shape recognition. V4 neurons exhibit selectivity for the orientation and curvature of boundary fragments. In this paper, we propose a novel neural network model of V4 for shape-based feature extraction and other vision tasks. The low-level layers of the model consist of computational units simulating simple cells and complex cells in the primary visual cortex. These layers extract preliminary visual features including edges and orientations. The V4 computational units calculate the entropy of the extracted features as a measure of visual saliency. The features around salient points are then selected and encoded with a layer of restricted Boltzmann machine to generate an intermediate representation of object shapes. The model is evaluated in shape distinction, feature detection, feature matching, and object discrimination experiments. The results demonstrate that this model generates discriminative local representation of object shapes. It shows a successful attempt to construct a computational model of visual object recognition in the brain.
Restricted Boltzmann machines for collaborative filtering Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as users' ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user/movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6% better than the score of Netflix's own system.
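A minimal sketch of the training primitive behind such models: one contrastive-divergence (CD-1) weight update for a tiny binary RBM, in NumPy. Biases are omitted and the visible units are plain binary; the paper's rating model instead uses softmax visible units with weights tied across users, so treat this only as the generic building block.

import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    # One CD-1 update: up-pass, sample hiddens, down-pass, up-pass again,
    # then move W toward the data statistics and away from the model's.
    global W
    h0 = sigmoid(v0 @ W)                        # hidden unit probabilities
    h_sample = (rng.random(n_hid) < h0) * 1.0   # stochastic hidden states
    v1 = sigmoid(W @ h_sample)                  # reconstruction of visibles
    h1 = sigmoid(v1 @ W)
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))

v = (rng.random(n_vis) < 0.5) * 1.0             # a made-up binary example
for _ in range(100):
    cd1_step(v)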
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
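For ground normal programs, the stable model test is mechanical: form the Gelfond-Lifschitz reduct with respect to a candidate set of atoms, compute the reduct's least model, and compare. A small Python sketch of that standard construction; the (head, positive body, negative body) rule encoding is an assumption of this example.

def is_stable_model(program, candidate):
    # Reduct: drop rules whose negative body meets the candidate, then
    # delete the remaining "not" literals from the surviving rules.
    reduct = [(h, pos) for (h, pos, neg) in program
              if not (set(neg) & candidate)]
    # Least model of the now negation-free reduct, by fixpoint iteration.
    least, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if set(pos) <= least and h not in least:
                least.add(h)
                changed = True
    return least == candidate

# p :- not q.   q :- not p.   This program has two stable models: {p}, {q}.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
assert is_stable_model(prog, {"p"}) and is_stable_model(prog, {"q"})
assert not is_stable_model(prog, {"p", "q"})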
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Indexing By Latent Semantic Analysis
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
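For the M/G/1 analysis mentioned here, the mean waiting time is given by the Pollaczek-Khinchine formula W = lambda * E[S^2] / (2 * (1 - rho)) with utilization rho = lambda * E[S]. A one-function Python sketch; the arrival rate and service moments below are made-up numbers, not measurements from the paper.

def mg1_mean_wait(lam, es, es2):
    # Pollaczek-Khinchine mean wait for an M/G/1 queue.
    rho = lam * es
    assert rho < 1, "queue must be stable (rho < 1)"
    return lam * es2 / (2.0 * (1.0 - rho))

# e.g. 40 req/s, mean service 15 ms, E[S^2] = 4e-4 s^2 (seek + rotation + transfer)
print(mg1_mean_wait(40.0, 0.015, 4e-4))   # -> 0.02 s mean wait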
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.2
0.005263
0.000615
0.000098
0
0
0
0
0
0
0
0
0
0
Agent Learning In Relational Domains Based On Logical Mdps With Negation In this paper, we propose a model named Logical Markov Decision Processes with Negation for Relational Reinforcement Learning, for applying Reinforcement Learning algorithms to relational domains with states and actions in relational form. In the model, the logical negation is represented explicitly, so that the abstract state space can be constructed from the goal state(s) of a given task simply by applying a generating method and an expanding method, and each ground state can be represented by one and only one abstract state. Prototype action is also introduced into the model, so that the applicable abstract actions can be obtained automatically. Based on the model, a model-free Theta(lambda)-learning algorithm is implemented to evaluate the state-action-substitution value function. We also propose a state refinement method guided by two formal definitions of self-loop degree and common characteristic of abstract states to construct the abstract state space automatically by the agent itself rather than manually. The experiments show that the agent can capture the core of the given task, and the final state space is intuitive.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
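The baseline the paper improves on can be stated directly: evaluate a prenex-CNF QBF by branching on the outermost quantifier and simplifying the matrix as variables get assigned. The naive Python evaluator below omits all of the paper's pruning techniques, and its input encoding is chosen for this example only.

def eval_qbf(prefix, clauses, assign=()):
    # prefix: list of ('A'|'E', var); clauses: lists of signed ints.
    amap = dict(assign)
    simp = []
    for c in clauses:
        if any(amap.get(abs(l)) == (l > 0) for l in c):
            continue                       # clause already satisfied
        c2 = [l for l in c if amap.get(abs(l)) is None]
        if not c2:
            return False                   # every literal falsified
        simp.append(c2)
    if not simp:
        return True                        # matrix already true
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, simp, assign + ((v, b),)) for b in (True, False))
    return any(branches) if q == 'E' else all(branches)

# forall x exists y: (x or y) and (not x or not y)  -- true, take y = not x
assert eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]) is True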
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
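A compact NumPy sketch of the construction described here: build an RBF Gram matrix, double-center it (which centers the data in feature space), and read nonlinear principal component scores off its leading eigenvectors. The kernel choice, gamma value, and scaling convention are assumptions of this illustration.

import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # centering in feature space
    w, V = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))  # component scores

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))          # made-up data
print(kernel_pca(X, n_components=2).shape)   # (20, 2)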
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On Computing Solutions to Belief Change Scenarios Belief change scenarios were recently introduced as a framework for expressing different forms of belief change. In this paper, we show how belief revision and belief contraction (within belief change scenarios) can be axiomatised by means of quantified Boolean formulas. This approach has several benefits. First, it furnishes an axiomatic specification of belief change within belief change scenarios. Second, this axiomatisation allows us to identify upper bounds for the complexity of revision and contraction within belief change scenarios. We strengthen these upper bounds by providing strict complexity results for the considered reasoning tasks. Finally, we obtain an implementation of different forms of belief change by appeal to the existing system QUIP.
On deciding subsumption problems Subsumption is an important redundancy elimination method in automated deduction. A clause D is subsumed by a set 𝒞 of clauses if there is a clause C ∈ 𝒞 and a substitution σ such that the literals of Cσ are included in D. In the field of automated model building, subsumption has been modified to an even stronger redundancy elimination method, namely the so-called clausal H-subsumption. Atomic H-subsumption emerges from clausal H-subsumption by restricting D to an atom and 𝒞 to a set of atoms. Both clausal and atomic H-subsumption play an indispensable key role in automated model building. Moreover, problems equivalent to atomic H-subsumption have been studied with different terminologies in many areas of computer science. Both clausal and atomic H-subsumption are known to be intractable, i.e., Σ₂ᵖ-complete and NP-complete, respectively. In this paper, we present a new approach to deciding (clausal and atomic) H-subsumption that is based on a reduction to QSAT2 and SAT, respectively.
An Efficient Unification Algorithm
Modal Nonmonotonic Logics Revisited: Efficient Encodings for the Basic Reasoning Tasks Modal nonmonotonic logics constitute a well-known family of knowledge-representation formalisms capturing ideally rational agents reasoning about their own beliefs. Although these formalisms are extensively studied from a theoretical point of view, most of these approaches lack generally available solvers thus far. In this paper, we show how variants of Moore's autoepistemic logic can be axiomatised by means of quantified Boolean formulas (QBFs). More specifically, we provide polynomial reductions of the basic reasoning tasks associated with these logics into the evaluation problem of QBFs. Since there are now efficient QBF-solvers, this reduction technique yields a practicably relevant approach to build prototype reasoning systems for these formalisms. We incorporated our encodings within the system QUIP and tested their performance on a class of benchmark problems using different underlying QBF-solvers.
QUBE: A System for Deciding Quantified Boolean Formulas Satisfiability
Planning as satisfiability
Optimizing a BDD-Based Modal Solver In an earlier work we showed how a competitive satisfiability solver for the modal logic K can be built on top of a BDD package. In this work we study optimization issues for such solvers. We focus on two types of optimizations. First we study variable ordering, which is known to be of critical importance to BDD-based algorithms. Second, we study modal extensions of the pure-literal rule. Our results show that the payoff of the variable-ordering optimization is rather modest, while the payoff of the pure-literal optimization is quite significant. We benchmark our optimized solver against both native solvers (DLP) and translation-based solvers (MSPASS and SEMPROP). Our results indicate that the BDD-based approach dominates for modally heavy formulas, while search-based approaches dominate for propositionally-heavy formulas.
Symbolic Decision Procedures for QBF Much recent work has gone into adapting techniques that were originally developed for SAT solving to QBF solving. In particular, QBF solvers are often based on SAT solvers. Most competitive QBF solvers are search-based. In this work we explore an alternative approach to QBF solving, based on symbolic quantifier elimination. We extend some recent symbolic approaches for SAT solving to symbolic QBF solving, using various decision-diagram formalisms such as OBDDs and ZDDs. In both approaches, QBF formulas are solved by eliminating all their quantifiers. Our first solver, QMRES, maintains a set of clauses represented by a ZDD and eliminates quantifiers via multi-resolution. Our second solver, QBDD, maintains a set of OBDDs, and eliminates quantifiers by applying them to the underlying OBDDs. We compare our symbolic solvers to several competitive search-based solvers. We show that QBDD is not competitive, but QMRES compares favorably with search-based solvers on various benchmarks consisting of non-random formulas.
Impediments to Universal preference-based default theories Research on nonmonotonic and default reasoning has identified several important criteria for preferring alternative default inferences. The theories of reasoning based on each of these criteria may uniformly be viewed as theories of rational inference, in which the reasoner selects maximally preferred states of belief. Though researchers have noted some cases of apparent conflict between the preferences supported by different theories, it has been hoped that these special theories of reasoning may be combined into a universal logic of nonmonotonic reasoning. We show that the different categories of preferences conflict more than has been realized, and adapt formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning. Our results can be interpreted as demonstrating that, within the preferential framework, we cannot expect much improvement on the rigid lexicographic priority mechanisms that have been proposed for conflict resolution.
Tractable planning with state variables by exploiting structural restrictions So far, tractable planning problems reported in the literature have been defined by syntactical restrictions. To better exploit the inherent structure in problems, however, it is probably necessary to study also structural restrictions on the state-transition graph. Such restrictions are typically computationally hard to test, though, since this graph is of exponential size. We take an intermediate approach, using a state-variable model for planning and restricting the state-transition graph...
Some connections between bounded query classes and non-uniform complexity It is shown that if there is a polynomial-time algorithm that tests k(n)=O(log n) points for membership in a set A by making only k(n)-1 adaptive queries to an oracle set X, then A belongs to NP/poly intersection co-NP/poly (if k(n)=O(1) then A belong to P/poly). In particular, k(n)=O(log n) queries to an NP -complete set (k(n)=O(1) queries to an NP-hard set) are more powerful than k(n)-1 queries, unless the polynomial hierarchy collapses. Similarly, if there is a small circuit that tests k(n) points for membership in A by making only k(n)-1 adaptive queries to a set X, then there is a correspondingly small circuit that decides membership in A without an oracle. An investigation is conducted of the quantitatively stronger assumption that there is a polynomial-time algorithm that tests 2k strings for membership in A by making only k queries to an oracle X, and qualitatively stronger conclusions about the structure of A are derived: A cannot be self-reducible unless A∈P, and A cannot be NP-hard unless P=NP. Similar results hold for counting classes. In addition, relationships between bounded-query computations, lowness, and the p-degrees are investigated
Primal separation algorithms In a recent paper, the authors argued for a re-examination of primal cutting plane algorithms, in which cutting planes are used to enable a feasible solution to the original problem to be improved. This led to the idea of primal separation algorithms, which are similar to standard separation algorithms but tailored to the primal context. In this paper we examine the complexity of primal separation for several well-known classes of inequalities for various important combinatorial optimization problems. It turns out that, in several important cases, the primal separation problem can be solved more easily than the standard one. This is a strong argument in favour of the primal approach. One of the most striking results is that there is an exact polynomial-time primal separation algorithm for a generalization of the so-called Chvátal comb inequalities for the TSP. No polynomial-time algorithm is known for the standard separation problem.
Case-Based Support for the Design of Dynamic System Requirements Using formal specifications based on varieties of mathematical logic is becoming common in the process of designing and implementing software. Formal methods are usually intended to include all important details of the final system in the specification with the aim of proving that it possesses certain mathematical properties. In large, complex systems, this task requires sophisticated theorem proving, which can be difficult and complicated. Telecommunication systems are large and complex, making detailed formal specification impractical with current technology. However, roughly formal "sketches" of the behaviours these services provide can be produced, and these can be very helpful in locating which service might be relevant to a given problem. Our case-based approach uses coarse-grained requirements specification sketches to outline the basic behaviour of the system's functional modules (called services), thereby allowing us to identify, reuse and adapt requirements (from cases stored in a library) to construct new cases. By using cases that have already been tested, integrated and implemented, less effort is needed to produce requirements specifications on a large scale. Using a hypothetical telecommunication system as our example, we shall show how comparatively simple logic can be used to capture coarse-grained behaviour and how a case-based approach benefits from this. The input from the examples is used both to identify the cases whose behaviour corresponds most closely to the designer's intentions and to adapt and finally verify the proposed solution against the examples.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.051436
0.030467
0.027316
0.021544
0.00897
0.003714
0.000704
0.000139
0.00002
0.000001
0
0
0
0
Simple Ppdb: A Paraphrase Database For Simplification We release the Simple Paraphrase Database, a subset of the Paraphrase Database (PPDB) adapted for the task of text simplification. We train a supervised model to associate simplification scores with each phrase pair, producing rankings competitive with state-of-the-art lexical simplification models. Our new simplification database contains 4.5 million paraphrase rules, making it the largest available resource for lexical simplification.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Composition of Deep and Spiking Neural Networks for Very Low Bit Rate Speech Coding. Most current very low bit rate (VLBR) speech coding systems use hidden Markov model (HMM) based speech recognition and synthesis techniques. This allows transmission of information such as phonemes segment by segment, which decreases the bit rate. However, an encoder based on phoneme speech recognition may create bursts of segmental errors; these would be further propagated to any suprasegmental (such as syllable) information coding. Together with the errors of voicing detection in pitch parametrization, HMM-based speech coding leads to speech discontinuities and unnatural speech sound artifacts. In this paper, we propose a novel VLBR speech coding framework based on neural networks (NNs) for end-to-end speech analysis and synthesis without HMMs. The speech coding framework relies on a phonological (subphonetic) representation of speech. It is designed as a composition of deep and spiking NNs: a bank of phonological analyzers at the transmitter, and a phonological synthesizer at the receiver. These are both realized as deep NNs, along with a spiking NN as an incremental and robust encoder of syllable boundaries for coding of continuous fundamental frequency (F0). A combination of phonological features defines many more sound patterns than the phonetic features defined by HMM-based speech coders; this finer analysis/synthesis code contributes to smoother encoded speech. Listeners significantly prefer the NN-based approach due to fewer discontinuities and speech artifacts in the encoded speech. A single forward pass is required during speech encoding and decoding. The proposed VLBR speech coding operates at a bit rate of approximately 360 bits/s.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
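Since the stable model semantics is defined by a fixpoint condition (a set of atoms is stable iff it equals the least model of the Gelfond-Lifschitz reduct of the program with respect to that set), it can be checked by brute force on small ground programs. The sketch below does exactly that; the triple-based rule encoding is an ad hoc convenience, not a standard format.

```python
# Brute-force stable-model enumeration for ground normal programs.
# Rules are (head, positive_body, negative_body) triples over string atoms.
from itertools import chain, combinations

def least_model(definite_rules):
    """Least Herbrand model of a negation-free program (fixpoint iteration)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head); changed = True
    return model

def stable_models(rules):
    atoms = set(chain.from_iterable([h] + list(p) + list(n) for h, p, n in rules))
    for k in range(len(atoms) + 1):
        for cand in map(set, combinations(sorted(atoms), k)):
            # Reduct: drop rules whose negative body intersects the candidate,
            # then delete the negative literals from the remaining rules.
            reduct = [(h, set(p), set()) for h, p, n in rules if not (set(n) & cand)]
            if least_model(reduct) == cand:
                yield cand

# p :- not q.    q :- not p.      (two stable models: {p} and {q})
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(list(stable_models(prog)))   # [{'p'}, {'q'}]
```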
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
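The core of the Davis-Putnam extension described above is a simple recursion: strip the outermost quantifier, branch on both truth values of its variable, and combine the results with AND for universal and OR for existential quantifiers. The sketch below implements just that naive recursion, without any of the paper's pruning techniques for universal quantifiers; the prefix/clause encoding is an assumption made for the example.

```python
# Naive recursive QBF evaluator. The formula is a quantifier prefix of
# ('forall' | 'exists', var) pairs plus a CNF matrix given as a list of
# integer-literal clauses (positive = true literal, negative = negated).
def eval_qbf(prefix, clauses, assignment=None):
    assignment = assignment or {}
    if not prefix:   # all variables assigned: evaluate the CNF matrix
        return all(any(assignment.get(abs(l), False) == (l > 0) for l in c)
                   for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    results = (eval_qbf(rest, clauses, {**assignment, v: val})
               for val in (False, True))
    return all(results) if q == 'forall' else any(results)

# forall x exists y . (x or y) and (not x or not y)   -- true (take y = not x)
print(eval_qbf([('forall', 1), ('exists', 2)], [[1, 2], [-1, -2]]))  # True
```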
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
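The computation described above reduces to an eigendecomposition of the (centered) kernel matrix; a minimal sketch with an RBF kernel follows. The gamma value and the data are placeholders, and the normalization convention is one common choice.

```python
# Minimal kernel PCA: eigendecompose the double-centered kernel matrix.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Double-center K so the feature-space data has zero mean.
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; numpy's eigh returns ascending eigenvalues.
    w, V = np.linalg.eigh(Kc)
    w, V = w[::-1][:n_components], V[:, ::-1][:, :n_components]
    # Scale eigenvectors by 1/sqrt(lambda) so projections are normalized.
    alphas = V / np.sqrt(np.maximum(w, 1e-12))
    return Kc @ alphas        # principal-component scores of the inputs

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(kernel_pca(X).shape)    # (100, 2)
```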
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
CP-nets: a tool for representing and reasoning with conditional ceteris paribus preference statements Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence.
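A small executable illustration of the ceteris paribus reading may help: on tiny instances, a dominance query can be answered by searching for a sequence of single-variable improving flips. The two-variable main-course/wine network and its tables below are invented for the sketch; dominance testing is much harder in general, as the next abstract notes.

```python
# Toy binary CP-net plus brute-force dominance testing via improving flips.
from collections import deque

# Each variable's preference depends only on its parents (ceteris paribus):
# fish > meat unconditionally; wine should match the main course.
parents = {"main": (), "wine": ("main",)}
prefer  = {"main": {(): "fish"},
           "wine": {("fish",): "white", ("meat",): "red"}}

def improving_flips(outcome):
    """Yield outcomes one improving flip away from `outcome`."""
    for var in outcome:
        ctx = tuple(outcome[p] for p in parents[var])
        best = prefer[var][ctx]
        if outcome[var] != best:            # flipping to `best` improves
            yield {**outcome, var: best}

def dominates(o1, o2):
    """True iff o1 is reachable from o2 by improving flips (o1 preferred)."""
    start = tuple(sorted(o2.items()))
    seen, queue = {start}, deque([start])
    while queue:
        cur = dict(queue.popleft())
        if cur == o1:
            return cur != o2
        for nxt in improving_flips(cur):
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key); queue.append(key)
    return False

print(dominates({"main": "fish", "wine": "white"},
                {"main": "meat", "wine": "red"}))   # True
```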
The computational complexity of dominance and consistency in CP-Nets We investigate the computational complexity of testing dominance and consistency in CP-nets. Previously, the complexity of dominance has been determined for restricted classes in which the dependency graph of the CP-net is acyclic. However, there are preferences of interest that define cyclic dependency graphs; these are modeled with general CP-nets. In our main results, we show here that both dominance and consistency for general CP-nets are PSPACE-complete. We then consider the concept of strong dominance, dominance equivalence and dominance incomparability, and several notions of optimality, and identify the complexity of the corresponding decision problems. The reductions used in the proofs are from STRIPS planning, and thus reinforce the earlier established connections between both areas.
Logical Preference Representation and Combinatorial Vote We introduce the notion of combinatorial vote, where a group of agents (or voters) is supposed to express preferences and come to a common decision concerning a set of non-independent variables to assign. We study two key issues pertaining to combinatorial vote, namely preference representation and the automated choice of an optimal decision. For each of these issues, we briefly review the state of the art, we try to define the main problems to be solved and identify their computational complexity.
Planning Graphs and Knowledge Compilation. One of the major advances in classical planning has been the development of Graphplan. Graphplan builds a layered structure called the planning graph, and then searches this structure backwards for a plan. Modern SAT and CSP approaches also use the planning graph but replace the regression search by a constraint-directed search. The planning graph uncovers implicit constraints in the problem that reduce the size of the search tree. Such constraints encode lower bounds on the number of time steps required for achieving the goal and account for the huge performance gap between Graphplan and its predecessors. Still, the form of local consistency underlying the construction of the planning graph is not well understood, being described by various authors as a limited form of negative binary resolution, k-consistency, or 2-j consistency. In this paper, we aim to shed light on this issue by showing that the computation of the planning graph corresponds exactly to the iterative computation of prime implicates of size one and two over the logical encoding of the problem with the goals removed. The correspondence between planning graphs and a precise form of knowledge compilation provides a well-founded basis for understanding and developing extensions of the planning graph to non-Strips settings, and suggests novel and effective forms of knowledge compilation in other contexts. We explore some of these extensions in this paper and relate planning graphs with bounded variable elimination algorithms as studied by Rina Dechter and others.
Effective approaches for partial satisfaction (over-subscription) planning In many real world planning scenarios, agents often do not have enough resources to achieve all of their goals. Consequently, they are forced to find plans that satisfy only a subset of the goals. Solving such partial satisfaction planning (PSP) problems poses several challenges, including an increased emphasis on modeling and handling plan quality (in terms of action costs and goal utilities). Despite the ubiquity of such PSP problems, very little attention has been paid to them in the planning community. In this paper, we start by describing a spectrum of PSP problems and focus on one of the more general PSP problems, termed PSP NET BENEFIT. We develop three techniques, (i) one based on integer programming, called OptiPlan, (ii) the second based on regression planning with reachability heuristics, called AltAltps, and (iii) the third based on anytime heuristic search for a forward state-space heuristic planner, called Sapaps. Our empirical studies with these planners show that the heuristic planners generate plans whose quality is comparable to that of the plans generated by OptiPlan, while incurring only a small fraction of the cost.
BReLS: A System for the Integration of Knowledge Bases The process of integrating knowledge coming from different sources has been widely investigated in the literature. Three distinct conceptual approaches to this problem have been most successful: belief revision, merging and update. In this paper we present a framework that integrates these three approaches. In the proposed framework all three operations can be performed. We provide an example that can only be solved by applying more than one single style of knowledge integration and, therefore, cannot be addressed by any one of the approaches alone. The framework has been implemented, and the examples shown in this paper (as well as other examples from the belief revision literature) have been successfully tested.
Encoding Planning Problems in Nonmonotonic Logic Programs. We present a framework for encoding planning problems in logic programs with negation as failure, having computational efficiency as our major consideration. In order to accomplish our goal, we bring together ideas from logic programming and the planning systems Graphplan and Satplan. We discuss different representations of planning problems in logic programs, point out issues related to their performance, and show ways to exploit the structure of the domains in these representations.
Red-Black Relaxed Plan Heuristics.
Analyzing search topology without running any search: on the connection between causal graphs and h+ The ignoring delete lists relaxation is of paramount importance for both satisficing and optimal planning. In earlier work, it was observed that the optimal relaxation heuristic h+ has amazing qualities in many classical planning benchmarks, in particular pertaining to the complete absence of local minima. The proofs of this are hand-made, raising the question whether such proofs can be led automatically by domain analysis techniques. In contrast to earlier disappointing results - the analysis method has exponential runtime and succeeds only in two extremely simple benchmark domains - we herein answer this question in the affirmative. We establish connections between causal graph structure and h+ topology. This results in low-order polynomial time analysis methods, implemented in a tool we call TorchLight. Of the 12 domains where the absence of local minima has been proved, TorchLight gives strong success guarantees in 8 domains. Empirically, its analysis exhibits strong performance in a further 2 of these domains, plus in 4 more domains where local minima may exist but are rare. In this way, TorchLight can distinguish "easy" domains from "hard" ones. By summarizing structural reasons for analysis failure, TorchLight also provides diagnostic output indicating domain aspects that may cause local minima.
The Logic of Persistence A recent paper [Hanks 1985] examines temporal reasoning as an example of default reasoning. Its authors conclude that all current systems of default reasoning, including non-monotonic logic, default logic, and circumscription, are inadequate for reasoning about persistence. I present a way of representing persistence in a framework based on a generalization of circumscription, which captures Hanks and McDermott's procedural representation.
Main memory database systems: an overview Main memory database systems (MMDBs) store their data in main physical memory and provide very high-speed access. Conventional database systems are optimized for the particular characteristics of disk storage mechanisms. Memory resident systems, on the other hand, use different optimizations to structure and organize data, as well as to make it reliable. The authors survey the major memory residence optimizations and briefly discuss some of the MMDBs that have been designed or implemented.
A principle for resilient sharing of distributed resources A technique is described which permits distributed resources to be shared (services to be offered) in a resilient manner. The essence of the technique is to a priori declare one of the server hosts primary and the others backups. Any of the servers can perform the primary duties. Thus the role of primary can migrate around the set of servers. The concept of n-host resiliency is introduced and the error detection and recovery schemes for two-host resiliency are presented. The single primary, multiple backup technique for resource sharing is shown to have minimal delay. In the general case, this is superior to multiple primary techniques.
Undecidability in epistemic planning Dynamic epistemic logic (DEL) provides a very expressive framework for multi-agent planning that can deal with nondeterminism, partial observability, sensing actions, and arbitrary nesting of beliefs about other agents' beliefs. However, as we show in this paper, this expressiveness comes at a price. The planning framework is undecidable, even if we allow only purely epistemic actions (actions that change only beliefs, not ontic facts). Undecidability holds already in the S5 setting with at least 2 agents, and even with 1 agent in S4. It shows that multi-agent planning is robustly undecidable if we assume that agents can reason with an arbitrary nesting of beliefs about beliefs. We also prove a corollary showing undecidability of the DEL model checking problem with the star operator on actions (iteration).
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
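The scheme above is XOR arithmetic at heart: each parity element is the XOR of a row or column of data elements, and the extra elements mirror existing parities. The sketch below shows the reconstruction step on integer-modeled blocks; the 4x4 layout is arbitrary, and mirroring all the row parities (rather than the paper's specific choice of half the parity elements) is a simplification.

```python
# Two-dimensional XOR parity with mirrored row parities, on toy blocks.
import functools, operator, random

n = 4
data = [[random.getrandbits(32) for _ in range(n)] for _ in range(n)]
xor = lambda blocks: functools.reduce(operator.xor, blocks, 0)

row_parity = [xor(data[i]) for i in range(n)]                        # n elements
col_parity = [xor([data[i][j] for i in range(n)]) for j in range(n)] # n elements
row_parity_mirror = list(row_parity)       # the n extra mirrored parities

# Recover data[2][1] after losing it, using its row parity:
lost = data[2][1]
rebuilt = row_parity[2] ^ xor(data[2][:1] + data[2][2:])
assert rebuilt == lost
# Even if row_parity[2] is lost too, its mirror still enables recovery:
rebuilt = row_parity_mirror[2] ^ xor(data[2][:1] + data[2][2:])
assert rebuilt == lost
# Column parities independently cover the same block from the other axis:
rebuilt = col_parity[1] ^ xor([data[i][1] for i in range(n) if i != 2])
assert rebuilt == lost
print("recovered block 0x%08x" % rebuilt)
```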
1.037366
0.0416
0.0416
0.018649
0.012537
0.004133
0.001162
0.000256
0.000088
0.000006
0
0
0
0
Heterogeneous domain adaptation using previously learned classifier for object detection problem When a classifier trained on a specific domain (the source domain) is applied in a different domain (the target domain), its accuracy degrades significantly. The main reason for this degradation is the distribution difference between the source and target domains. Domain adaptation aims to lessen this accuracy degradation. In this paper, we focus on adaptation for heterogeneous domains (where the source and target domains may have different feature spaces) and propose a novel algorithm which uses the pre-learned source classifier to adapt a trained target classifier. In this method, a max-margin classifier is trained on the target data and is adapted using the offset of the source classifier. The main strength of this adaptation is its low complexity and high speed, which makes it a proper adaptation choice for problems with large datasets such as object detection. We test our method on human detection datasets and the experimental results show a significant improvement in accuracy in comparison to several baselines.
A SVM-based model-transferring method for heterogeneous domain adaptation. In many real classification scenarios the distribution of the test (target) domain is different from the training (source) domain. The distribution shift between the source and target domains may cause the source classifier not to gain the expected accuracy on the target data. Domain adaptation has been introduced to solve the accuracy-dropping problem caused by the distribution shift phenomenon between domains. In this paper, we study model-transferring methods as a practical branch of adaptation methods, which adapt the source classifier to new domains without using the source samples. We introduce a new SVM-based model-transferring method, in which a max-margin classifier is trained on labeled target samples and is adapted using the offset of the source classifier. We call it the Heterogeneous Max-Margin Classifier Adaptation Method, abbreviated as HMCA. The main strength of HMCA is its applicability to heterogeneous domains, where the source and target domains may have different feature types. This property is important because previously proposed model-transferring methods do not provide any solution for heterogeneous problems. We also introduce a new similarity metric that reliably measures adaptability between two domains according to the HMCA structure. In situations where we have access to several source classifiers, the metric can be used to select the most appropriate one for adaptation. We test HMCA on two different computer vision problems (pedestrian detection and image classification). The experimental results show the advantage in accuracy of our approach in comparison to several baselines. Highlights: we propose a new SVM-based model-transferring method for adaptation; our method applies adaptation in the one-dimensional discrimination space; the proposed method can handle heterogeneous domains; based on the proposed model-transferring method, we design a new metric for measuring the adaptability between two domains.
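Neither abstract spells out the exact update rule, so the following is a heavily hedged sketch of one plausible reading of the offset-transfer idea: train a max-margin classifier on the few labeled target samples, then blend its intercept with the source classifier's intercept. Because only a scalar offset crosses domains, the two feature spaces need not match, which is consistent with the heterogeneous setting; the convex-combination weight alpha is an assumption.

```python
# Hedged sketch of offset transfer between two linear max-margin models.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X_src = rng.normal(size=(500, 10)); y_src = (X_src[:, 0] > 0.2).astype(int)
X_tgt = rng.normal(size=(30, 10));  y_tgt = (X_tgt[:, 0] > -0.1).astype(int)

src = LinearSVC(dual=False).fit(X_src, y_src)   # pre-learned source model
tgt = LinearSVC(dual=False).fit(X_tgt, y_tgt)   # model on scarce target data

# Assumed combination rule: blend the scalar offsets. Only the offset is
# transferred, so source and target feature spaces need not match.
alpha = 0.5                                     # assumed blending weight
tgt.intercept_ = (1 - alpha) * tgt.intercept_ + alpha * src.intercept_
print(tgt.decision_function(X_tgt[:3]))
```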
Domain Adaptive Classification We propose an unsupervised domain adaptation method that exploits intrinsic compact structures of categories across different domains using binary attributes. Our method directly optimizes for classification in the target domain. The key insight is finding attributes that are discriminative across categories and predictable across domains. We achieve a performance that significantly exceeds the state-of-the-art results on standard benchmarks. In fact, in many cases, our method reaches the same-domain performance, the upper bound, in unsupervised domain adaptation scenarios.
Histogram Of Template For Human Detection In this paper, we propose a novel feature named histogram of template (HOT) for human detection in still images. For every pixel of an image, various templates are defined, each of which contains the pixel itself and two of its neighboring pixels. If the intensity and gradient values of the three pixels satisfy a pre-defined function, the central pixel is regarded to meet the corresponding template for this function. Histograms of pixels meeting various templates are calculated for a set of functions, and combined to be the feature for detection. Compared to other features, the proposed feature takes intensity as well as gradient information into consideration. Besides, it reflects the relationship between 3 pixels, instead of focusing on only one. Experiments for human detection are performed on the INRIA dataset, which shows the proposed HOT feature is more discriminative than the histogram of oriented gradients (HOG) feature, under the same training method.
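In outline, the HOT feature is: for each pixel, test a set of 3-pixel templates against predicates and histogram how often each template fires. The sketch below uses an invented 4-template set and a single "central pixel has maximal intensity" predicate; the paper combines several functions over both intensity and gradient values.

```python
# Illustrative histogram-of-template computation over a grayscale image.
import numpy as np

OFFSETS = [((-1, 0), (1, 0)), ((0, -1), (0, 1)),      # 4 example templates:
           ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]    # the two neighbors

def hot_histogram(img):
    h = np.zeros(len(OFFSETS))
    H, W = img.shape
    for t, ((dy1, dx1), (dy2, dx2)) in enumerate(OFFSETS):
        a = img[1:-1, 1:-1]                            # central pixels
        b = img[1 + dy1:H - 1 + dy1, 1 + dx1:W - 1 + dx1]
        c = img[1 + dy2:H - 1 + dy2, 1 + dx2:W - 1 + dx2]
        h[t] = np.sum((a >= b) & (a >= c))             # pixels meeting template
    return h / h.sum()                                  # normalized histogram

img = np.random.default_rng(0).random((64, 64))
print(hot_histogram(img))
```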
Dual Fuzzy-Possibilistic Co-clustering for Document Categorization In this paper, we introduce a new algorithm called Dual Fuzzy-Possibilistic Co-clustering (DFPC) for document categorization. The proposed algorithm offers several advantages. Firstly, the combined fuzzy and possibilistic cluster memberships in DFPC can provide realistic representation of document clusters. Secondly, as a co-clustering algorithm, DFPC can categorize high-dimensional datasets effectively. Thirdly, the possibilistic clustering element of the algorithm makes it robust to outliers. We detail the formulation of DFPC, and empirically demonstrate its effectiveness in categorizing benchmark document datasets.
Composite sketch recognition via deep network - a transfer learning approach Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In the recent past, software-generated composite sketches have been preferred as they are more consistent and faster to construct than hand-drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn based on the witness description and lack minute details which are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with a deep learning representation. In the proposed algorithm, first a deep learning architecture based facial representation is learned using a large face database of photos and then the representation is updated using a small problem-specific training database. Experiments are performed on the extended PRIP database and it is observed that the proposed algorithm outperforms a recently proposed approach and a commercial face recognition system.
Handwritten Hangul recognition using deep convolutional neural networks In spite of the advances in recognition technology, handwritten Hangul recognition (HHR) remains largely unsolved due to the presence of many confusing characters and excessive cursiveness in Hangul handwriting. Even the best existing recognizers do not lead to satisfactory performance for practical applications and have much lower performance than those developed for Chinese or alphanumeric characters. To improve the performance of HHR, here we developed a new type of recognizer based on deep neural networks (DNNs). DNNs have recently shown excellent performance in many pattern recognition and machine learning problems, but they have not been attempted for HHR. We built our Hangul recognizers based on deep convolutional neural networks and proposed several novel techniques to improve the performance and training speed of the networks. We systematically evaluated the performance of our recognizers on two public Hangul image databases, SERI95a and PE92. Using our framework, we achieved a recognition rate of 95.96% on SERI95a and 92.92% on PE92. Compared with the previous best records of 93.71% on SERI95a and 87.70% on PE92, our results yielded improvements of 2.25 and 5.22 percentage points, respectively. These improvements lead to error reduction rates of 35.71% on SERI95a and 42.44% on PE92, relative to the previous lowest error rates. Such improvement fills a significant portion of the large gap between practical requirements and the actual performance of Hangul recognizers.
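For concreteness, here is a minimal PyTorch sketch of a deep convolutional classifier of the general kind described; the paper's actual architectures, input resolution, and class counts for SERI95a/PE92 are not reproduced, so the 64x64 input and 2,350-class output below are assumptions.

```python
# Small convolutional classifier sketch for character recognition.
import torch
import torch.nn as nn

class HangulCNN(nn.Module):
    def __init__(self, n_classes=2350):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, n_classes)  # 64 -> 8 after pooling

    def forward(self, x):                  # x: (batch, 1, 64, 64) grayscale
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = HangulCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)                        # torch.Size([4, 2350])
```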
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Learning eigenfunctions links spectral embedding and kernel PCA. In this letter, we show a direct relation between spectral embedding methods and kernel principal components analysis and how both are special cases of a more general learning problem: learning the principal eigenfunctions of an operator defined from a kernel and the unknown data-generating density. Whereas spectral embedding methods provided only coordinates for the training points, the analysis justifies a simple extension to out-of-sample examples (the Nyström formula) for multidimensional scaling (MDS), spectral clustering, Laplacian eigenmaps, locally linear embedding (LLE), and Isomap. The analysis provides, for all such spectral embedding methods, the definition of a loss function, whose empirical average is minimized by the traditional algorithms. The asymptotic expected value of that loss defines a generalization performance and clarifies what these algorithms are trying to learn. Experiments with LLE, Isomap, spectral clustering, and MDS show that this out-of-sample embedding formula generalizes well, with a level of error comparable to the effect of small perturbations of the training set on the embedding.
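The Nyström extension referred to above embeds a new point using only its kernel values against the training set and the training eigendecomposition. A small numerical illustration follows; kernel centering is omitted for brevity, which slightly simplifies the spectral embedding being extended.

```python
# Nyström out-of-sample extension of a spectral (kernel) embedding.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # training points
Xnew = rng.normal(size=(5, 3))                     # out-of-sample points

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(X, X)
w, V = np.linalg.eigh(K)
w, V = w[::-1][:2], V[:, ::-1][:, :2]              # top-2 eigenpairs

emb_train = V * np.sqrt(w)                          # spectral embedding of X
emb_new = rbf(Xnew, X) @ (V / np.sqrt(w))           # Nyström extension
# Sanity check: applying the extension to a training point reproduces
# (up to numerical error) its training embedding.
print(np.allclose(rbf(X[:1], X) @ (V / np.sqrt(w)), emb_train[:1], atol=1e-6))
```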
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Probability one separation of the Boolean hierarchy
Microprocessor technology trends The rapid pace of advancement of microprocessor technology has shown no sign of diminishing, and this pace is expected to continue in the future. Recent trends in such areas as silicon technology, processor architecture and implementation, system organization, buses, higher levels of integration, self-testing, caches, coprocessors, and fault tolerance are discussed, and expectations for further ad...
Introduction: progress in formal commonsense reasoning This special issue consists largely of expanded and revised versions of selected papers of the Fifth International Symposium on Logical Formalizations of Commonsense Reasoning (Common Sense 2001), held at New York University in May 2001. The Common Sense Symposia, first organized in 1991 by John McCarthy and held roughly biannually since, are dedicated to exploring the development of formal commonsense theories using mathematical logic. Commonsense reasoning is a central part of human behavior; no real intelligence is possible without it. Thus, the development of systems that exhibit commonsense behavior is a central goal of Artificial Intelligence. It has proven to be more difficult to create systems that are capable of commonsense reasoning than systems that can solve "hard" reasoning problems. There are chess-playing programs that beat champions [5] and expert systems that assist in clinical diagnosis [32], but no programs that reason about how far one must bend over to put on one's socks. Part of the difficulty is the all-encompassing aspect of commonsense reasoning: any problem one looks at touches on many different types of knowledge. Moreover, in contrast to expert knowledge which is usually explicit, most commonsense knowledge is implicit. One of the prerequisites to developing commonsense reasoning systems is making this knowledge explicit. John McCarthy [25] first noted this need and suggested using formal logic to encode commonsense knowledge and reasoning. In the ensuing decades, there has been much research on the representation of knowledge in formal logic and on inference algorithms to
Internet information triangulation: Design theory and prototype evaluation Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory (consisting of a kernel theory, meta-requirements, and meta-designs) for software and services that triangulate Internet information. The kernel theory identifies 5 triangulation methods based on Churchman's inquiring systems theory and related meta-requirements. These meta-requirements are used to search for existing software and services that contain design features for Internet information triangulation tools. We discuss a prototyping study of the use of an information triangulator among 72 college students and how their use contributes to their opinion formation. From these findings, we conclude that triangulation tools can contribute to opinion formation by information consumers, especially when the tool is not a mere fact checker but includes the search and delivery of alternative views. Finally, we discuss other empirical propositions and design propositions for an agenda for triangulator developers and researchers. In particular, we propose investment in theory triangulation, that is, tools to automatically detect ethically and theoretically alternative information and views.
1.071678
0.0705
0.048541
0.039478
0.026822
0.003333
0.000263
0.000016
0.000002
0
0
0
0
0
Cross-domain video concept detection using adaptive svms Many multimedia applications can benefit from techniques for adapting existing classifiers to data with different distributions. One example is cross-domain video concept detection which aims to adapt concept classifiers across various video domains. In this paper, we explore two key problems for classifier adaptation: (1) how to transform existing classifier(s) into an effective classifier for a new dataset that only has a limited number of labeled examples, and (2) how to select the best existing classifier(s) for adaptation. For the first problem, we propose Adaptive Support Vector Machines (A-SVMs) as a general method to adapt one or more existing classifiers of any type to the new dataset. It aims to learn the "delta function" between the original and adapted classifier using an objective function similar to SVMs. For the second problem, we estimate the performance of each existing classifier on the sparsely-labeled new dataset by analyzing its score distribution and other meta features, and select the classifiers with the best estimated performance. The proposed method outperforms several baseline and competing methods in terms of classification accuracy and efficiency in cross-domain concept detection in the TRECVID corpus.
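The "delta function" idea can be made concrete with a rough sketch: hold the source decision function fixed and fit a small linear correction on the target data under a hinge loss. The subgradient-descent optimizer, the synthetic data, and the hyperparameters below are all illustrative, not the A-SVM formulation from the paper.

```python
# Rough numpy sketch: learn a linear "delta" on top of a fixed source scorer.
import numpy as np

rng = np.random.default_rng(0)
Xs = rng.normal(size=(400, 5)); ys = np.sign(Xs[:, 0] + 0.1 * Xs[:, 1])
Xt = rng.normal(size=(40, 5));  yt = np.sign(Xt[:, 0] + 0.8 * Xt[:, 1])

# "Source classifier": a fixed linear scoring function trained elsewhere.
w_src = np.array([1.0, 0.1, 0.0, 0.0, 0.0]); f_src = lambda X: X @ w_src

def adapt(Xt, yt, f_src, C=1.0, lr=0.01, epochs=200):
    w, b = np.zeros(Xt.shape[1]), 0.0        # the learned delta function
    for _ in range(epochs):
        margins = yt * (f_src(Xt) + Xt @ w + b)
        viol = margins < 1                    # hinge-loss violators
        grad_w = w - C * (yt[viol, None] * Xt[viol]).sum(0)
        grad_b = -C * yt[viol].sum()
        w -= lr * grad_w; b -= lr * grad_b
    return w, b

w, b = adapt(Xt, yt, f_src)
acc = np.mean(np.sign(f_src(Xt) + Xt @ w + b) == yt)
print(f"target accuracy after adaptation: {acc:.2f}")
```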
Histogram Of Template For Human Detection In this paper, we propose a novel feature named histogram of template (HOT) for human detection in still images. For every pixel of an image, various templates are defined, each of which contains the pixel itself and two of its neighboring pixels. If the intensity and gradient values of the three pixels satisfy a pre-defined function, the central pixel is regarded to meet the corresponding template for this function. Histograms of pixels meeting various templates are calculated for a set of functions, and combined to be the feature for detection. Compared to other features, the proposed feature takes intensity as well as gradient information into consideration. Besides, it reflects the relationship between 3 pixels, instead of focusing on only one. Experiments for human detection are performed on the INRIA dataset, which shows the proposed HOT feature is more discriminative than the histogram of oriented gradients (HOG) feature, under the same training method.
Dual Fuzzy-Possibilistic Co-clustering for Document Categorization In this paper, we introduce a new algorithm called Dual Fuzzy-Possibilistic Co-clustering (DFPC) for document categorization. The proposed algorithm offers several advantages. Firstly, the combined fuzzy and possibilistic cluster memberships in DFPC can provide realistic representation of document clusters. Secondly, as a co-clustering algorithm, DFPC can categorize high-dimensional datasets effectively. Thirdly, the possibilistic clustering element of the algorithm makes it robust to outliers. We detail the formulation of DFPC, and empirically demonstrate its effectiveness in categorizing benchmark document datasets.
Adapting SVM Classifiers to Data with Shifted Distributions Many data mining applications can benefit from adapting existing classifiers to new data with shifted distributions. In this paper, we present Adaptive Support Vector Machine (Adapt-SVM) as an efficient model for adapting an SVM classifier trained from one dataset to a new dataset where only limited labeled examples are available. By introducing a new regularizer into SVM's objective function, Adapt-SVM aims to minimize both the classification error over the training examples, and the discrepancy between the adapted and original classifier. We also propose a selective sampling strategy based on the loss minimization principle to seed the most informative examples for classifier adaptation. Experiments on an artificial classification task and on a benchmark video classification task show that Adapt-SVM outperforms several baseline methods in terms of accuracy and/or efficiency.
Efficient Learning of Domain-invariant Image Representations
Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity-labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.
Learning continuous attractors in recurrent networks One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in...
An empirical evaluation of deep architectures on problems with many factors of variation Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date, they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.
Learning deep structured semantic models for web search using clickthrough data Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.
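Of the techniques mentioned, word hashing is the easiest to show in isolation: each word is decomposed into letter trigrams with boundary markers, mapping an unbounded vocabulary into a compact trigram space. A minimal sketch follows; the '#' boundary marker is one conventional choice.

```python
# Letter-trigram word hashing: vocabulary -> small trigram feature space.
from collections import Counter

def letter_trigrams(word):
    padded = f"#{word.lower()}#"            # mark word boundaries
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def hash_text(text):
    counts = Counter()
    for word in text.split():
        counts.update(letter_trigrams(word))
    return counts                            # sparse trigram-count vector

print(letter_trigrams("good"))   # ['#go', 'goo', 'ood', 'od#']
print(hash_text("good good query"))
```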
EXPLODE: a lightweight, general system for finding serious storage system errors Storage systems such as file systems, databases, and RAID systems have a simple, basic contract: you give them data, they do not lose or corrupt it. Often they store the only copy, making its irrevocable loss almost arbitrarily bad. Unfortunately, their code is exceptionally hard to get right, since it must correctly recover from any crash at any program point, no matter how their state was smeared across volatile and persistent memory. This paper describes EXPLODE, a system that makes it easy to systematically check real storage systems for errors. It takes user-written, potentially system-specific checkers and uses them to drive a storage system into tricky corner cases, including crash recovery errors. EXPLODE uses a novel adaptation of ideas from model checking, a comprehensive, heavy-weight formal verification technique, that makes its checking more systematic (and hopefully more effective) than a pure testing approach while being just as lightweight. EXPLODE is effective. It found serious bugs in a broad range of real storage systems (without requiring source code): three version control systems, Berkeley DB, an NFS implementation, ten file systems, a RAID system, and the popular VMware GSX virtual machine. We found bugs in every system we checked, 36 bugs in total, typically with little effort.
On the road to recovery: restoring data after disasters Restoring data operations after a disaster is a daunting task: how should recovery be performed to minimize data loss and application downtime? Administrators are under considerable pressure to recover quickly, so they lack time to make good scheduling decisions. They schedule recovery based on rules of thumb, or on pre-determined orders that might not be best for the failure occurrence. With multiple workloads and recovery techniques, the number of possibilities is large, so the decision process is not trivial.This paper makes several contributions to the area of data recovery scheduling. First, we formalize the description of potential recovery processes by defining recovery graphs. Recovery graphs explicitly capture alternative approaches for recovering workloads, including their recovery tasks, operational states, timing information and precedence relationships. Second, we formulate the data recovery scheduling problem as an optimization problem, where the goal is to find the schedule that minimizes the financial penalties due to downtime, data loss and vulnerability to subsequent failures. Third, we present several methods for finding optimal or near-optimal solutions, including priority-based, randomized and genetic algorithm-guided ad hoc heuristics. We quantitatively evaluate these methods using realistic storage system designs and workloads, and compare the quality of the algorithms' solutions to optimal solutions provided by a math programming formulation and to the solutions from a simple heuristic that emulates the choices made by human administrators. We find that our heuristics' solutions improve on the administrator heuristic's solutions, often approaching or achieving optimality.
Saving queries with randomness In this paper, we investigate the power of randomness to save a query to an NP-complete set. We show that the ≤p_m-complete language for P^(SAT∥[k]) randomly reduces to a language in P^(SAT∥[k−1]) with a one-sided error probability of 1/⌈k/2⌉ or a two-sided error probability of 1/(k+1). Furthermore, we prove that these probability bounds are tight; i.e., they cannot be improved by 1/poly, unless PH collapses. We also obtain tight performance bounds for randomized reductions between nearby classes in the Boolean and bounded query hierarchies. These bounds provide probability thresholds for completeness under randomized reductions in these classes. Using these thresholds, we show that certain languages in the Boolean hierarchy which are not ≤p_m-complete in some relativized worlds nevertheless inherit many of the hardness properties associated with the ≤p_m-complete languages. Finally, we explore the relationship between randomization and functions that are computable using bounded queries to SAT. For any function h(n) = O(log n), we show that there is a function f computable using h(n) nonadaptive queries to SAT, which cannot be computed correctly with probability 1/2 + 1/poly by any randomized machine which makes fewer than h(n) adaptive queries to any oracle, unless PH collapses.
Pedestrian Detection With Deep Convolutional Neural Network The problem of pedestrian detection in image and video frames has been extensively investigated in the past decade. However, the low performance in complex scenes shows that it remains an open problem. In this paper, we propose to cascade simple Aggregated Channel Features (ACF) and rich Deep Convolutional Neural Network (DCNN) features for efficient and effective pedestrian detection in complex scenes. The ACF based detector is used to generate candidate pedestrian windows and the rich DCNN features are used for fine classification. Experiments show that the proposed approach achieved leading performance in the INRIA dataset and comparable performance to the state-of-the-art in the Caltech and ETH datasets.
1.03727
0.040151
0.039382
0.031389
0.02057
0.000792
0.000044
0.000007
0.000002
0
0
0
0
0
A Reconfigurable Virtual Storage Device This research is motivated by the demands of personalized storage devices that fit the quality-of-service needs of each individual user. In particular, we propose the design of a virtual storage device, consisting of multiple heterogeneous storage devices, with the capability of dynamic reconfiguration to fit the needs of users. A dynamic remapping mechanism is presented with a heap-based data structure to manage the movement of data among component storage devices for performance optimization. A lazy swapping method is then proposed to reduce the mapping overheads. The capability of the proposed design is evaluated with a prototype virtual storage device composed of a flash drive and a hard drive over an ext2 file system and realistic/generated workloads.
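As a rough illustration of heap-managed remapping, the sketch below tracks per-block access counts and periodically promotes the most-accessed blocks to the faster device. The capacities, block IDs, and rebalancing policy are invented for the example, and the paper's lazy-swapping refinement is not modeled.

```python
# Toy heap-driven remapping between a fast and a slow storage device.
import heapq
from collections import Counter

FAST_CAPACITY = 2                       # assumed: fast device holds 2 blocks
access_count = Counter()
placement = {b: "slow" for b in range(6)}    # all blocks start on slow disk

def access(block):
    access_count[block] += 1

def rebalance():
    # Pick the FAST_CAPACITY most-accessed blocks via a heap selection.
    hot = heapq.nlargest(FAST_CAPACITY, access_count, key=access_count.get)
    for b in placement:
        placement[b] = "fast" if b in hot else "slow"

for b in [0, 1, 1, 4, 4, 4, 5]:
    access(b)
rebalance()
print(placement)    # blocks 4 and 1 promoted to the fast device
```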
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
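A compact sketch of the procedure this abstract describes, using a polynomial kernel; the degree, the random data, and the normalization details are illustrative choices:

```python
import numpy as np

def kernel_pca(X, degree=5, n_components=2):
    K = (X @ X.T + 1.0) ** degree                    # polynomial kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:n_components]      # take the leading ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # training-set projections

X = np.random.default_rng(0).normal(size=(100, 3))
Z = kernel_pca(X)
print(Z.shape)   # (100, 2)
```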
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
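The parity arithmetic behind such arrays is plain XOR. Below is a toy illustration of the row and column parities of an n x n data layout; the uint8 blocks and the recovery example are assumptions for brevity, and the proposal's extra n parities would simply mirror one of these two sets:

```python
import numpy as np

def two_d_parities(data):
    """data: (n, n) array of blocks, here uint8 scalars for brevity."""
    row_parity = np.bitwise_xor.reduce(data, axis=1)   # n row parities
    col_parity = np.bitwise_xor.reduce(data, axis=0)   # n column parities
    return row_parity, col_parity

n = 4
data = np.arange(n * n, dtype=np.uint8).reshape(n, n)
rows, cols = two_d_parities(data)

# A lost block data[i, j] is recoverable from either its row or its column:
i, j = 1, 2
recovered = np.bitwise_xor.reduce(np.delete(data[i], j)) ^ rows[i]
assert recovered == data[i, j]
```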
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Generalization of Linear Cyclic Pursuit With Application to Rendezvous of Multiple Autonomous Agents Cyclic pursuit is a simple distributed control law in which agent i pursues agent i+1 modulo n. We generalize existing results and show that by selecting the gains of the agents, the point of convergence of these agents can be controlled. The condition for convergence, the range of controller gains and the reachable set where convergence can occur are studied. It is also shown that the sequence in which an agent pursues another does not affect the point of convergence.
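A short simulation sketch of this control law, with heterogeneous gains as the abstract discusses; the gains, initial positions, and Euler integration step are illustrative choices:

```python
import numpy as np

def cyclic_pursuit(z0, gains, dt=0.01, steps=20000):
    """Planar positions as complex numbers; dz_i/dt = k_i (z_{i+1} - z_i)."""
    z = np.array(z0, dtype=complex)
    k = np.asarray(gains, dtype=float)
    for _ in range(steps):
        z = z + dt * k * (np.roll(z, -1) - z)   # each agent chases the next
    return z

z_final = cyclic_pursuit([0, 1, 1 + 1j, 1j], gains=[1.0, 2.0, 1.0, 0.5])
print(np.ptp(z_final.real), np.ptp(z_final.imag))  # spread shrinks toward 0
```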
Consensus and Cooperation in Networked Multi-Agent Systems This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analys...
Novel Vehicular Trajectories for Collective Motion from Coupled Oscillator Steering Control We consider a model for vehicle motion coordination for three vehicles that uses coupled oscillator steering control. Prior work on such models has focused primarily on sinusoidal coupling functions, which typically give behavior in which individual vehicles move either in straight lines or in circles. We show that other, more exotic trajectories are possible when more general coupling functions are considered. Such trajectories are associated with periodic orbits in the steering control subsystem. The proximity of these periodic orbits to heteroclinic bifurcations allows for a detailed characterization of the properties of the vehicular trajectories.
Collective behavior with heterogeneous controllers In this paper, we study the collective motion of individually controlled planar particles when they are coupled through heterogeneous controller gains. Two types of collective formations, synchronization and balancing, are described and analyzed under the influence of these heterogeneous controller gains. These formations are characterized by the motion of the centroid of the group of particles. In synchronized formation, the particles and their centroid move in a common direction, while in balanced formation the particles move such that the centroid remains at a fixed location. We show that, by selecting suitable controller gains, these formations can be controlled significantly to obtain not only a desired direction of motion but also a desired location of the centroid. We present the results for N particles in synchronized formation, while in balanced formation our analysis is confined to two and three particles.
Collective motion, sensor networks and ocean sampling This paper addresses the design of mobile sensor networks for optimal data collection. The development is strongly motivated by the application to adaptive ocean sampling for an autonomous ocean observing and prediction system. A performance metric, used to derive optimal paths for the network of mobile sensors, defines the optimal data set as one which minimizes error in a model estimate of the s...
The nature of statistical learning theory
TCP Nice: a mechanism for background transfers Many distributed applications can make use of large background transfers--transfers of data that humans are not waiting for--to improve availability, reliability, latency or consistency. However, given the rapid fluctuations of available network bandwidth and changing resource costs due to technology trends, hand tuning the aggressiveness of background transfers risks (1) complicating applications, (2) being too aggressive and interfering with other applications, and (3) being too timid and not gaining the benefits of background transfers. Our goal is for the operating system to manage network resources in order to provide a simple abstraction of near zero-cost background transfers. Our system, TCP Nice, can provably bound the interference inflicted by background flows on foreground flows in a restricted network model. And our microbenchmarks and case study applications suggest that in practice it interferes little with foreground flows, reaps a large fraction of spare network bandwidth, and simplifies application construction and deployment. For example, in our prefetching case study application, aggressive prefetching improves demand performance by a factor of three when Nice manages resources; but the same prefetching hurts demand performance by a factor of six under standard network congestion control.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5] where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the “counting polynomial-time hierarchy” which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved.
Recognizing frozen variables in constraint satisfaction problems In constraint satisfaction problems over finite domains, some variables can be frozen, that is, they take the same value in all possible solutions. We study the complexity of the problem of recognizing frozen variables with restricted sets of constraint relations allowed in the instances. We show that the complexity of such problems is determined by certain algebraic properties of these relations. Under the assumption that NP ≠ coNP (and consequently PTIME ≠ NP), we characterize all tractable problems, and describe large classes of NP-complete, coNP-complete, and DP-complete problems. As an application of these results, we completely classify the complexity of the problem in two cases: (1) with domain size 2; and (2) when all unary relations are present. We also give a rough classification for domain size 3.
Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early.
Reasoning About Actions in Narrative Understanding Reasoning about actions has been a focus of interest in AI from the beginning and continues to receive attention. But the range of situations considered has been rather narrow and falls well short of what is needed for understanding natural language. Language understanding requires sophisticated reasoning about actions and events, and the world's languages employ a variety of grammatical and lexical devices to construe, direct attention and focus on, and control inferences about actions and events. We implemented a neurally inspired computational model that is able to reason about linguistic action and event descriptions, such as those found in news stories. The system uses an active event representation that also seems to provide natural and cognitively motivated solutions to classical problems in logical theories of reasoning about actions. For logical approaches to reasoning about actions, we suggest that looking at story understanding sets up fairly strong desiderata, both in terms of the fine-grained event and action distinctions and the kinds of real-time inferences required.
Representing the Process Semantics in the Event Calculus In this paper we shall present a translation of the process semantics [5] to the event calculus. The aim is to realize a method of integrating high-level semantics with logical calculi to reason about continuous change. The general translation rules and the soundness and completeness theorem of the event calculus with respect to the process semantics are main technical results of this paper.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.053141
0.055062
0.055062
0.039375
0.018925
0
0
0
0
0
0
0
0
0
CB3: An Adaptive Error Function for Backpropagation Training Effective backpropagation training of multi-layer perceptrons depends on the incorporation of an appropriate error or objective function. Classification-Based (CB) error functions are heuristic approaches that attempt to guide the network directly to correct pattern classification rather than using common error minimization heuristics, such as Sum-Squared Error (SSE) and Cross-Entropy (CE), which do not explicitly minimize classification error. This work presents CB3, a novel CB approach that learns the error function to be used while training. This is accomplished by learning pattern confidence margins during training, which are used to dynamically set output target values for each training pattern. On 11 applications, CB3 significantly outperforms previous CB error functions, and also reduces average test error over conventional error metrics using 0-1 targets without weight decay by 1.8%, and by 1.3% over metrics with weight decay. CB3 also exhibits lower model variance and a tighter mean confidence interval.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Differentiable Sparse Coding Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a Laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.
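To illustrate the move from L1 to a smooth sparsity prior, here is a toy MAP inference step using a log-cosh penalty as a differentiable stand-in (the paper's actual prior is KL-regularization; the dictionary and data below are random placeholders):

```python
import numpy as np

def map_codes(D, x, lam=0.1, lr=0.05, steps=500):
    """Gradient descent on 0.5*||D a - x||^2 + lam * sum(log cosh(a))."""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        resid = D @ a - x
        grad = D.T @ resid + lam * np.tanh(a)   # d/da log cosh(a) = tanh(a)
        a -= lr * grad
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
x = rng.normal(size=20)
a = map_codes(D, x)
print(np.sum(np.abs(a) > 0.05), "of", a.size, "codes are non-negligible")
```

Because the penalty is smooth, this MAP estimate varies smoothly with x, which is what makes implicit differentiation of the kind the abstract mentions possible.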
Rational Kernels: Theory and Algorithms Many classification algorithms were originally designed for fixed-size vectors. Recent applications in text and speech processing and computational biology require however the analysis of variable-length sequences and more generally weighted automata. An approach widely used in statistical learning techniques such as Support Vector Machines (SVMs) is that of kernel methods, due to their computational efficiency in high-dimensional feature spaces. We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels , that extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. Not all rational kernels are positive definite and symmetric (PDS), or equivalently verify the Mercer condition, a condition that guarantees the convergence of training for discriminant classification algorithms such as SVMs. We present several theoretical results related to PDS rational kernels. We show that under some general conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We give the proof of several characterization results that can be used to guide the design of PDS rational kernels. We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before. Rational kernels can be combined with SVMs to form efficient and powerful techniques for a variety of classification tasks in text and speech processing, or computational biology. We describe examples of general families of PDS rational kernels that are useful in many of these applications and report the result of our experiments illustrating the use of rational kernels in several difficult large-vocabulary spoken-dialog classification tasks based on deployed spoken-dialog systems. Our results show that rational kernels are easy to design and implement and lead to substantial improvements of the classification accuracy.
An Information Measure For Classification
Self Supervised Boosting Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.
A two-layer ICA-like model estimated by score matching Capturing regularities in high-dimensional data is an important problem in machine learning and signal processing. Here we present a statistical model that learns a nonlinear representation from the data that reflects abstract, invariant properties of the signal without making requirements about the kind of signal that can be processed. The model has a hierarchy of two layers, with the first layer broadly corresponding to Independent Component Analysis (ICA) and a second layer to represent higher order structure. We estimate the model using the mathematical framework of Score Matching (SM), a novel method for the estimation of non-normalized statistical models. The model incorporates a squaring nonlinearity, which we propose to be suitable for forming a higher-order code of invariances. Additionally the squaring can be viewed as modelling subspaces to capture residual dependencies, which linear models cannot capture.
Nonlocal estimation of manifold structure. We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where local nonparametric methods fail.
Efficient Pattern Recognition Using a New Transformation Distance Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product, ...), which are not particularly meaningful on pattern vectors. More complex, better suited distance measures are often expensive and rather ad-hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently outperformed all other systems tested on the same databases.
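A one-sided version of such a transformation distance reduces to a small least-squares problem: minimize ||x + Ta - y|| over transformation coefficients a, where the columns of T span the tangent plane of the transformations at x. A sketch with placeholder tangent vectors:

```python
import numpy as np

def tangent_distance(x, y, T):
    """T: (d, k) matrix whose columns span the tangent plane at x."""
    a, *_ = np.linalg.lstsq(T, y - x, rcond=None)   # best small transform
    return np.linalg.norm(x + T @ a - y)

rng = np.random.default_rng(1)
x = rng.normal(size=256)              # e.g., a flattened 16x16 pattern
T = rng.normal(size=(256, 3))         # 3 tangent directions (illustrative)
y = x + T @ np.array([0.2, -0.1, 0.05]) + 1e-3 * rng.normal(size=256)
print(tangent_distance(x, y, T), "vs Euclidean", np.linalg.norm(x - y))
```

Since y here is a small transformation of x plus noise, the tangent distance comes out far smaller than the plain Euclidean distance, which is exactly the invariance the abstract is after.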
Segmentation using eigenvectors: a unifying view Automatic grouping and segmentation of images remains a challenging problem in computer vision. Recently, a number of authors have demonstrated good performance on this task using methods that are based on eigenvectors of the affinity matrix. These approaches are extremely attractive in that they are based on simple eigen-decomposition algorithms whose stability is well understood. Nevertheless, the use of eigen-decompositions in the context of segmentation is far from well understood.In this paper we give a unified treatment of these algorithms, and show the close connections between them while highlighting their distinguishing features. We then prove results on eigenvectors of block matrices that allow us to analyze the performance of these algorithms in simple grouping settings. Finally, we use our analysis to motivate a variation on the existing methods that combines aspects from different eigenvector segmentation algorithms. We illustrate our analysis with results on real and synthetic images.
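A bare-bones instance of the eigenvector methods this abstract unifies, bipartitioning points with the second eigenvector of a normalized Laplacian; the Gaussian affinity, sigma, and median threshold are illustrative choices standing in for k-means:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))                 # affinity matrix
    d = W.sum(1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                               # second-smallest eigvec
    return fiedler > np.median(fiedler)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
labels = spectral_bipartition(X)
print(labels[:30].sum(), labels[30:].sum())   # one cluster per blob
```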
Classification using discriminative restricted Boltzmann machines Recently, many applications for Restricted Boltzmann Machines (RBMs) have been developed for a large variety of learning problems. However, RBMs are usually used as feature extractors for another learning algorithm or to provide a good initialization for deep feed-forward neural network classifiers, and are not considered as a standalone solution to classification problems. In this paper, we argue that RBMs provide a self-contained framework for deriving competitive non-linear classifiers. We present an evaluation of different learning algorithms for RBMs which aim at introducing a discriminative component to RBM training and improve their performance as classifiers. This approach is simple in that RBMs are used directly to build a classifier, rather than as a stepping stone. Finally, we demonstrate how discriminative RBMs can also be successfully employed in a semi-supervised setting.
Sleep stage classification using unsupervised feature learning Most attempts at training computers for the difficult and time-consuming task of sleep stage classification involve a feature extraction step. Due to the complexity of multimodal sleep data, the size of the feature space can grow to the extent that it is also necessary to include a feature selection step. In this paper, we propose the use of an unsupervised feature learning architecture called deep belief nets (DBNs) and show how to apply it to sleep data in order to eliminate the use of handmade features. Using a postprocessing step of hidden Markov model (HMM) to accurately capture sleep stage switching, we compare our results to a feature-based approach. A study of anomaly detection with the application to home environment data collection is also presented. The results using raw data with a deep architecture, such as the DBN, were comparable to a feature-based approach when validated on clinical datasets.
Deep learning for healthcare: review, opportunities and challenges. Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. Both steps are challenging when the data are complex and sufficient domain knowledge is lacking. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.
On the relations between stable and well-founded semantics of logic programs We study the relations between stable and well-founded semantics of logic programs. 1. We show that stable semantics can be defined in the same way as well-founded semantics based on the basic notion of unfounded sets. Hence, stable semantics can be considered as “two-valued well-founded semantics”. 2. An axiomatic characterization of stable and well-founded semantics of logic programs is given by a new completion theory, called strong completion . Similar to the Clark's completion, the strong completion can be interpreted in either two-valued or three-valued logic. We show that ◦ Two-valued strong completion specifies the stable semantics. ◦ Three-valued strong completion specifies the well-founded semantics. 3. We study the equivalence between stable semantics and well-founded semantics. At first, we prove the equivalence between the two semantics for strict programs. Then we introduce the bottom-stratified and top-strict condition generalizing both the stratifiability and the strictness, and show that the new condition is sufficient for the equivalence between stable and well-founded semantics. Further, we show that the call-consistency condition is sufficient for the existence of at least one stable model.
Conformant Planning via Model Checking Conformant planning is the problem of finding a sequence of actions that is guaranteed to achieve the goal for any possible initial state and nondeterministic behavior of the planning domain. In this paper we present a new approach to conformant planning. We propose an algorithm that returns the set of all conformant plans of minimal length if the problem admits a solution; otherwise it returns with failure. Our work is based on the planning via model checking paradigm, and relies on...
Editorial introduction to the Neural Networks special issue on Deep Learning of Representations.
1.02941
0.028593
0.028593
0.028593
0.028593
0.014323
0.010757
0.006361
0.001743
0.00002
0.000001
0
0
0
SAP speaks PDDL: exploiting a software-engineering model for planning in business process management Planning is concerned with the automated solution of action sequencing problems described in declarative languages giving the action preconditions and effects. One important application area for such technology is the creation of new processes in Business Process Management (BPM), which is essential in an ever more dynamic business environment. A major obstacle for the application of Planning in this area lies in the modeling. Obtaining a suitable model to plan with - ideally a description in PDDL, the most commonly used planning language - is often prohibitively complicated and/or costly. Our core observation in this work is that this problem can be ameliorated by leveraging synergies with model-based software development. Our application at SAP, one of the leading vendors of enterprise software, demonstrates that even one-to-one model re-use is possible. The model in question is called Status and Action Management (SAM). It describes the behavior of Business Objects (BO), i.e., large-scale data structures, at a level of abstraction corresponding to the language of business experts. SAM covers more than 400 kinds of BOs, each of which is described in terms of a set of status variables and how their values are required for, and affected by, processing steps (actions) that are atomic from a business perspective. SAM was developed by SAP as part of a major model-based software engineering effort. We show herein that one can use this same model for planning, thus obtaining a BPM planning application that incurs no modeling overhead at all. We compile SAM into a variant of PDDL, and adapt an off-the-shelf planner to solve this kind of problem. Thanks to the resulting technology, business experts may create new processes simply by specifying the desired behavior in terms of status variable value changes: effectively, by describing the process in their own language.
Planning-based configuration and management of distributed systems The configuration and runtime management of distributed systems is often complex due to the presence of a large number of configuration options and dependencies between interacting sub-systems. Inexperienced users usually choose default configurations because they are not aware of the possible configurations and/or their effect on the systems' operation. In doing so, they are unable to take advantage of the potentially wide range of system capabilities. Furthermore, managing inter-dependent sub-systems frequently involves performing a set of actions to get the overall system to the desired final state. In this paper, we propose a new approach for configuring and managing distributed systems based on AI planning. We use a goal-driven, tag-based user interaction paradigm to shield users from the complexities of configuring and managing systems. The key idea behind our approach is to package different configuration options and system management actions into reusable modules that can be automatically composed into workflows based on the user's goals. It also allows capturing the inter-dependencies between different configuration options, management actions and system states. We evaluate our approach in a case study involving three interdependent sub-systems. Our initial experiences indicate that this planning-based approach holds great promise in simplifying configuration and management tasks.
A planning based approach to failure recovery in distributed systems Failure recovery in distributed systems poses a difficult challenge because of the requirement for high availability. Failure scenarios are usually unpredictable, so they cannot easily be foreseen. In this research we propose a planning based approach to failure recovery. This approach automates failure recovery by capturing the state after failure, defining an acceptable recovered state as a goal and applying planning to get from the initial state to the goal state. By using planning, this approach can recover from a variety of failed states and reach any of several acceptable states: from minimal functionality to complete recovery.
Automated service composition using heuristic search Automated service composition is an important approach to automatically aggregate existing functionality. While different planning algorithms are applied in this area, heuristic search is currently not used. Missing features, such as the creation of compositions with parallel or alternative control flow, have prevented its application. The prospect of using heuristic search for composition with quality of service properties motivated the extension of existing heuristic search algorithms. In this paper we present a heuristic search algorithm for automated service composition. Based on the requirements for automated service composition, shortcomings of existing algorithms are identified, and solutions for them presented.
The first probabilistic track of the international planning competition The 2004 International Planning Competition, IPC-4, included a probabilistic planning track for the first time. We describe the new domain specification language we created for the track, our evaluation methodology, the competition domains we developed, and the results of the participating teams.
Automatic OBDD-based generation of universal plans in non-deterministic domains Most real world environments are non-deterministic. Automatic plan formation in non-deterministic domains is, however, still an open problem. In this paper we present a practical algorithm for the automatic generation of solutions to planning problems in non-deterministic domains. Our approach has the following main features. First, the planner generates Universal Plans. Second, it generates plans which are guaranteed to achieve the goal in spite of non-determinism, if such plans exist. Otherwise, the planner generates plans which encode iterative trial-and-error strategies (e.g. try to pick up a block until you succeed), which are guaranteed to achieve the goal under the assumption that if there is a non-deterministic possibility for the iteration to terminate, this will not be ignored forever. Third, the implementation of the planner is based on symbolic model checking techniques which have been designed to efficiently explore large state spaces. The implementation exploits the compactness of OBDDs (Ordered Binary Decision Diagrams) to express in a practical way universal plans of extremely large size.
Complexity Results for Planning I describe several computational complexity results for planning, some of which identify tractable planning problems. The model of planning, called "propositional planning," is simple: conditions within operators are literals with no variables allowed. The different planning problems are defined by different restrictions on the preconditions and postconditions of operators. The main results are: propositional planning is PSPACE-complete, even if operators are restricted to two positive (non-negated) preconditions and two postconditions, or if operators are restricted to one postcondition (with any number of preconditions). It is NP-complete if operators are restricted to positive postconditions, even if operators are restricted to one precondition and one positive postcondition. It is tractable in a few restricted cases, one of which is if each operator is restricted to positive preconditions and one postcondition. The blocks-world problem, slightly modified, is a subproblem of this restricted planning problem.
Logic programs with classical negation
On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision problems We investigate the computability of problems in probabilistic planning and partially observable infinite-horizon Markov decision processes. The undecidability of the string-existence problem for probabilistic finite automata is adapted to show that the following problem of plan existence in probabilistic planning is undecidable: given a probabilistic planning problem, determine whether there exists a plan with success probability exceeding a desirable threshold. Analogous policy-existence problems for partially observable infinite-horizon Markov decision processes under discounted and undiscounted total reward models, average-reward models, and state-avoidance models are all shown to be undecidable. The results apply to corresponding approximation problems as well.
Self-taught learning: transfer learning from unlabeled data We present a new machine learning framework called "self-taught learning" for using unlabeled data in supervised classification tasks. We do not assume that the unlabeled data follows the same class labels or generative distribution as the labeled data. Thus, we would like to use a large number of unlabeled images (or audio samples, or text documents) randomly downloaded from the Internet to improve performance on a given image (or audio, or text) classification task. Such unlabeled data is significantly easier to obtain than in typical semi-supervised or transfer learning settings, making self-taught learning widely applicable to many practical learning problems. We describe an approach to self-taught learning that uses sparse coding to construct higher-level features using the unlabeled data. These features form a succinct input representation and significantly improve classification performance. When using an SVM for classification, we further show how a Fisher kernel can be learned for this representation.
ARC: A Self-Tuning, Low Overhead Replacement Cache We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashion. The policy ARC uses a learning rule to adaptively and continually revise its assumptions about the workload. The policy ARC is empirically universal, that is, it empirically performs as well as a certain fixed replacement policy, even when the latter uses the best workload-specific tuning parameter that was selected in an offline fashion. Consequently, ARC works uniformly well across varied workloads and cache sizes without any need for workload-specific a priori knowledge or tuning. Various policies such as LRU-2, 2Q, LRFU, and LIRS require user-defined parameters, and, unfortunately, no single choice works uniformly well across different workloads and cache sizes. The policy ARC is simple to implement and, like LRU, has constant complexity per request. In comparison, policies LRU-2 and LRFU both require logarithmic time complexity in the cache size. The policy ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache. On 23 real-life traces drawn from numerous domains, ARC leads to substantial performance gains over LRU for a wide range of cache sizes. For example, for a SPC1-like synthetic benchmark, at a 4GB cache, LRU delivers a hit ratio of 9.19% while ARC achieves a hit ratio of 20%.
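A compressed rendition of the ARC policy, written from the published description from memory and simplified at the edges; treat it as an illustration rather than a reference implementation:

```python
from collections import OrderedDict

class ARC:
    """Toy ARC: T1/T2 hold cached keys (recency/frequency), B1/B2 are
    ghost lists of recently evicted keys, p is the adaptive target |T1|."""
    def __init__(self, c):
        self.c, self.p = c, 0
        self.T1, self.T2 = OrderedDict(), OrderedDict()
        self.B1, self.B2 = OrderedDict(), OrderedDict()

    def _replace(self, hit_in_b2):
        if len(self.T1) + len(self.T2) < self.c:
            return                                  # cache not yet full
        if self.T1 and (len(self.T1) > self.p or
                        (hit_in_b2 and len(self.T1) == self.p)):
            k, _ = self.T1.popitem(last=False)      # LRU of T1 -> ghost B1
            self.B1[k] = None
        elif self.T2:
            k, _ = self.T2.popitem(last=False)      # LRU of T2 -> ghost B2
            self.B2[k] = None
        else:
            k, _ = self.T1.popitem(last=False)
            self.B1[k] = None

    def access(self, key):
        if key in self.T1 or key in self.T2:        # hit: promote to T2 MRU
            (self.T1 if key in self.T1 else self.T2).pop(key)
            self.T2[key] = None
            return True
        if key in self.B1:                          # ghost hit: favor recency
            self.p = min(self.c, self.p + max(1, len(self.B2) // max(1, len(self.B1))))
            del self.B1[key]
            self._replace(False)
            self.T2[key] = None
            return False
        if key in self.B2:                          # ghost hit: favor frequency
            self.p = max(0, self.p - max(1, len(self.B1) // max(1, len(self.B2))))
            del self.B2[key]
            self._replace(True)
            self.T2[key] = None
            return False
        l1 = len(self.T1) + len(self.B1)            # complete miss
        if l1 == self.c:
            if len(self.T1) < self.c:
                self.B1.popitem(last=False)
                self._replace(False)
            else:
                self.T1.popitem(last=False)
        elif l1 < self.c:
            total = l1 + len(self.T2) + len(self.B2)
            if total >= self.c:
                if total == 2 * self.c:
                    self.B2.popitem(last=False)
                self._replace(False)
        self.T1[key] = None
        return False

cache = ARC(4)
hits = sum(cache.access(k) for k in [1, 2, 3, 1, 1, 4, 5, 6, 2, 1])
print("hits:", hits)
```

The self-tuning behavior lives entirely in the two ghost-hit branches: a hit in B1 means the recency list was evicted too eagerly, so p grows; a hit in B2 shrinks it.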
User interface declarative models and development environments: a survey Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which user interface declarative models can be constructed and related, as part of the user interface design process. This paper provides a review of MB-UIDE technologies. A framework for describing the elements of a MB-UIDE is presented. A representative collection of 14 MB-UIDEs are selected, described in terms of the framework, compared and analysed from the information available in the literature. The framework can be used as an introduction to the MB-UIDE technology since it relates and provides a description for the terms used in MB-UIDE papers.
Representing the process semantics in the situation calculus This paper presents a formal method based on the high-level semantics of processes to reason about continuous change. With a case study we show how the semantics of processes can be integrated with the situation calculus. The soundness and completeness of the situation calculus with respect to the process semantics are proven. Furthermore, a logic program is implemented to support the process semantics within the situation calculus.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.11
0.12
0.11
0.005
0.002
0.000312
0.000023
0
0
0
0
0
0
0
Context-based prediction for road traffic state using trajectory pattern mining and recurrent convolutional neural networks. With the broad adoption of global positioning system-enabled devices, a massive amount of trajectory data is generated every day, which results in meaningful traffic patterns. This study aims to predict the future state of road traffic instead of merely showing the current traffic condition. First, we use a clustering method to group similar trajectories of a particular period together into a cluster for each road. Second, for each cluster, we average the lengths and angles of all trajectories in the group to form a representative trajectory, which is regarded as the road pattern. Third, we create a feature vector for each road based on its historical traffic conditions and neighbor road patterns. Finally, we design a recurrent convolutional neural network for modeling the complex nonlinear relationship among features to predict road traffic conditions. Experimental results show that our approach performs favorably compared with several traditional machine learning and state-of-the-art algorithms.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
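The definition invites a brute-force checker: guess a candidate set of atoms, compute the Gelfond-Lifschitz reduct, and test whether the candidate equals the least model of its own reduct. A toy sketch (the rule encoding is assumed, and the enumeration is exponential, so this is for illustration only):

```python
from itertools import combinations

# A rule is (head, positive_body, negative_body); bodies are sets of atoms.

def reduct(program, model):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate model; strip negation from the surviving rules.
    return [(h, pos) for (h, pos, neg) in program if not (neg & model)]

def least_model(positive_program):
    # Least fixpoint of the immediate-consequence operator.
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if pos <= m and h not in m:
                m.add(h)
                changed = True
    return m

def stable_models(program, atoms):
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            m = set(cand)
            if least_model(reduct(program, m)) == m:
                yield m

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(list(stable_models(prog, {"p", "q"})))   # [{'p'}, {'q'}]
```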
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
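The naive extension of the Davis-Putnam procedure that this abstract takes as its starting point is just recursive expansion of the quantifier prefix. A toy evaluator, with none of the paper's pruning techniques, might look like this (representing the matrix as a Python callback is an assumption of the sketch, not the paper's encoding):

```python
def eval_qbf(prefix, matrix, assignment=None):
    # prefix: list of ('forall' | 'exists', var); matrix: callback that
    # maps a complete {var: bool} assignment to True/False.
    assignment = assignment or {}
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: value})
                for value in (False, True))
    return all(branches) if quant == 'forall' else any(branches)

# forall x exists y. (x <-> y)  -- true: choose y = x.
print(eval_qbf([('forall', 'x'), ('exists', 'y')],
               lambda a: a['x'] == a['y']))   # True
```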
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
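A compact numpy rendering of the recipe: build a kernel matrix (an RBF kernel is assumed here; the paper's running example is polynomial), center it in feature space, and read the nonlinear components off the top eigenvectors.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    # RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center in feature space: K' = K - 1K - K1 + 1K1 with 1 = ones/n.
    n = len(K)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Top eigenvectors of the centered kernel give the components.
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Projections of the training points onto the principal axes.
    return vecs * np.sqrt(np.maximum(vals, 0))

X = np.random.default_rng(0).standard_normal((100, 5))
Z = kernel_pca(X, n_components=2)   # (100, 2) nonlinear components
```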
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A-Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to study incident classification algorithms, few study investigated how to enhance feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that effective feature mapping function can be learnt from the data crosses the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Extracting distributed representations of concepts and relations from positive and negative propositions Linear relational embedding (LRE) was introduced previously by the authors (1999) as a means of extracting a distributed representation of concepts from relational data. The original formulation cannot use negative information and cannot properly handle data in which there are multiple correct answers. In this paper we propose an extended formulation of LRE that solves both these problems. We present results in two simple domains, which show that learning leads to good generalization.
Nonlocal estimation of manifold structure. We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where local nonparametric methods fail.
Global Data Analysis and the Fragmentation Problem in Decision Tree Induction We investigate an inherent limitation of top-down decision tree induction in which the continuous partitioning of the instance space progressively lessens the statistical support of every partial (i.e. disjunctive) hypothesis, known as the fragmentation problem. We show, both theoretically and empirically, how the fragmentation problem adversely affects predictive accuracy as variation (a measure of concept difficulty) increases. Applying feature-construction techniques at every tree node, which we implement on a decision tree inducer DALI, is proved to only partially solve the fragmentation problem. Our study illustrates how a more robust solution must also assess the value of each partial hypothesis by recurring to all available training data, an approach we name global data analysis, which decision tree induction alone is unable to accomplish. The value of global data analysis is evaluated by comparing modified versions of C4.5 rules with C4.5 trees and DALI, on both artificial and real-world domains. Empirical results suggest the importance of combining both feature construction and global data analysis to solve the fragmentation problem.
Rational Kernels: Theory and Algorithms Many classification algorithms were originally designed for fixed-size vectors. Recent applications in text and speech processing and computational biology require however the analysis of variable-length sequences and more generally weighted automata. An approach widely used in statistical learning techniques such as Support Vector Machines (SVMs) is that of kernel methods, due to their computational efficiency in high-dimensional feature spaces. We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels , that extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. Not all rational kernels are positive definite and symmetric (PDS), or equivalently verify the Mercer condition, a condition that guarantees the convergence of training for discriminant classification algorithms such as SVMs. We present several theoretical results related to PDS rational kernels. We show that under some general conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We give the proof of several characterization results that can be used to guide the design of PDS rational kernels. We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before. Rational kernels can be combined with SVMs to form efficient and powerful techniques for a variety of classification tasks in text and speech processing, or computational biology. We describe examples of general families of PDS rational kernels that are useful in many of these applications and report the result of our experiments illustrating the use of rational kernels in several difficult large-vocabulary spoken-dialog classification tasks based on deployed spoken-dialog systems. Our results show that rational kernels are easy to design and implement and lead to substantial improvements of the classification accuracy.
Building continuous space language models for transcribing european languages Large vocabulary continuous speech recognizers for English Broadcast News achieve today word error rates below 10%. An important factor for this success is the availability of large amounts of acoustic and language modeling training data. In this paper the recognition of French Broadcast News and English and Spanish parliament speeches is addressed, tasks for which less resources are available. A neural network language model is applied that takes better advantage of the limited amount of training data. This approach performs the estimation of the probabilities in a continuous space, allowing by this means smooth interpolations. Word error reductions of up to 0.9% absolute are reported with respect to a carefully tuned backoff language model trained on the same data.
Online Convex Programming and Generalized Infinitesimal Gradient Ascent. Convex programming involves a convex set $F \subseteq \mathbb{R}^n$ and a convex cost function $c : F \to \mathbb{R}$. The goal of convex programming is to find a point in $F$ which minimizes $c$. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in $F$ before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
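The algorithm the abstract introduces (greedy projection) is a one-liner per round: step against the current cost's gradient, then project back onto the feasible set. A toy sketch, where the step-size schedule and the sequence of quadratic costs are made up for illustration:

```python
import numpy as np

def greedy_projection(grad_fns, project, x0, eta=0.1):
    # Online gradient steps: move against the gradient of each round's
    # cost, then project back onto the feasible convex set F.
    x = np.asarray(x0, dtype=float)
    for t, grad in enumerate(grad_fns, start=1):
        x = project(x - (eta / np.sqrt(t)) * grad(x))
        yield x

# Toy instance: F is the unit disc, round-t cost is ||x - target_t||^2.
project = lambda x: x / max(1.0, np.linalg.norm(x))
targets = [np.array([np.cos(t / 10.0), np.sin(t / 10.0)]) for t in range(50)]
grad_fns = [lambda x, c=c: 2.0 * (x - c) for c in targets]
xs = list(greedy_projection(grad_fns, project, x0=np.zeros(2)))
print(xs[-1])   # final iterate, tracking the drifting target
```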
Is Learning The N-Th Thing Any Easier Than Learning The First? This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.
Decoding by linear programming This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector $f \in \mathbb{R}^n$ from corrupted measurements $y = Af + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem ($\|x\|_{\ell_1} := \sum_i |x_i|$) $\min_{g \in \mathbb{R}^n} \|y - Ag\|_{\ell_1}$, provided that the support of the vector of errors is not too large: $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \leq \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of $\ell_1$ is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
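The $\ell_1$ minimization becomes a linear program after introducing slack variables $t \geq |y - Ag|$. A sketch with scipy's generic LP solver; the problem sizes and corruption level below are made up:

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    # min_g ||y - A g||_1, recast as: min sum(t)  s.t.  -t <= y - A g <= t.
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)],     #  A g - t <=  y
                     [-A, -np.eye(m)]])   # -A g - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))
f = rng.standard_normal(20)
e = np.zeros(60)
e[rng.choice(60, size=5, replace=False)] = rng.standard_normal(5)
f_hat = l1_decode(A, A @ f + e)
print(np.max(np.abs(f_hat - f)))   # typically near solver precision
```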
Sparse deep belief net model for visual area V2 Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or "deep," structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear ("contour") features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex "corner" features matches well with the results from Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features.
3D Object Classification Using Deep Belief Networks Extracting features with strong expressive and discriminative ability is one of the key factors for the effectiveness of a 3D model classifier. A good deal of research has illustrated that deep belief networks (DBN) have enough power to represent the distributions of input data. In this paper, we apply a DBN to extract the features of 3D models. After applying a contrastive divergence method, we obtain a well-trained DBN, which can powerfully represent the input data; the feature is then taken from the output of the last layer. This procedure is unsupervised. Due to the limited labeled data, a semi-supervised method is utilized to recognize 3D objects using the feature obtained from the trained DBN. The experiments are conducted on the publicly available Princeton Shape Benchmark (PSB), and the experimental results demonstrate the effectiveness of our method.
Using Different Cost Functions to Train Stacked Auto-Encoders Deep neural networks comprise several hidden layers of units, which can be pre-trained one at a time via an unsupervised greedy approach. A whole network can then be trained (fine-tuned) in a supervised fashion. One possible pre-training strategy is to regard each hidden layer in the network as the input layer of an auto-encoder. Since auto-encoders aim to reconstruct their own input, their training must be based on some cost function capable of measuring reconstruction performance. Similarly, the supervised fine-tuning of a deep network needs to be based on some cost function that reflects prediction performance. In this work we compare different combinations of cost functions in terms of their impact on layer-wise reconstruction performance and on supervised classification performance of deep networks. We employed two classic functions, namely the cross-entropy (CE) cost and the sum of squared errors (SSE), as well as the exponential (EXP) cost, inspired by the error entropy concept. Our results were based on a number of artificial and real-world data sets.
Well founded semantics for logic programs with explicit negation. The aim of this paper is to provide a semantics for general logic programs (with negation by default) extended with explicit negation, subsuming well founded semantics [22]. The Well Founded semantics for extended logic programs (WFSX) is expressible by a default theory semantics we have devised [11]. This relationship improves the cross-fertilization between logic programs and default theories, since we generalize previous results concerning their relationship [3, 4, 7, 1, 2], and there is...
Planning with sensing, concurrency, and exogenous events: logical framework and implementation The focus of current research in cognitive robotics is both on the realization of systems based on known formal settings and on the extension of previous formal approaches to account for features that play a significant role for autonomous robots, but have not yet received an adequate treatment. In this paper we adopt a formal framework derived from Propositional Dynamic Logics by exploiting their formal correspondence with Description Logics, and present an extension of such a framework obtained by introducing both concurrency on primitive actions and autoepistemic operators for explicitly representing the robot's epistemic state. We show that the resulting formal setting allows for the representation of actions with context-dependent effects, sensing actions, and concurrent actions, and address both the presence of exogenous events and the characterization of the notion of executable plan in such a complex setting. Moreover, we present an implementation of this framework in a system which is capable of generating plans that are actually executed on mobile robots, and illustrate the experimentation of such a system in the design and implementation of soccer players for the 1999 Robocup competition.
Editorial introduction to the Neural Networks special issue on Deep Learning of Representations.
1.075502
0.050445
0.050445
0.050021
0.050021
0.025335
0.012553
0.005608
0.000505
0.000007
0.000002
0
0
0
Parametric Learning of Deep Convolutional Neural Network Deep neural networks have recently been showing great potential on visual recognition tasks. However, they are also considered difficult to tune, and they carry a high training cost. This work focuses on the analysis of several learning methods and the properties of a deep convolutional network with multinomial logistic regression. We implemented a scalable deep neural network, compared the efficiency of different methods, and examined how parameters affect the learning process. We propose an efficient method of performing back-propagation with limited kernel functions on the GPU and achieved better efficiency. Our conclusions can be applied to train deep networks more efficiently. We achieved a recognition rate of over 0.95 without image preprocessing or fine tuning, within 10 minutes on a single machine.
Towards adaptive learning with improved convergence of deep belief networks on graphics processing units In this paper we focus on two complementary approaches to significantly decrease pre-training time of a deep belief network (DBN). First, we propose an adaptive step size technique to enhance the convergence of the contrastive divergence (CD) algorithm, thereby reducing the number of epochs to train the restricted Boltzmann machine (RBM) that supports the DBN infrastructure. Second, we present a highly scalable graphics processing unit (GPU) parallel implementation of the CD-k algorithm, which boosts notably the training speed. Additionally, extensive experiments are conducted on the MNIST and the HHreco databases. The results suggest that the maximum useful depth of a DBN is related to the number and quality of the training samples. Moreover, it was found that the lower-level layer plays a fundamental role for building successful DBN models. Furthermore, the results contradict the pre-conceived idea that all the layers should be pre-trained. Finally, it is shown that by incorporating multiple back-propagation (MBP) layers, the DBN's generalization capability is remarkably improved.
Kernel Methods for Deep Learning. We introduce a new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets. These kernel functions can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that we call multilayer kernel machines (MKMs). We evaluate SVMs and MKMs with these kernel functions on problems designed to illustrate the advantages of deep architectures. On several problems, we obtain better results than previous, leading benchmarks from both SVMs with Gaussian kernels as well as deep belief nets.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
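One layer of the stack reduces to: corrupt the input, encode, decode, and score the reconstruction against the clean input. A minimal PyTorch sketch; dropout stands in for the masking noise (it also rescales activations, a small departure from the paper's corruption process), and all sizes are arbitrary:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    # One layer of the stack: corrupt, encode, decode; the loss compares
    # the reconstruction against the CLEAN input, not the corrupted one.
    def __init__(self, d_in, d_hidden, corruption=0.3):
        super().__init__()
        self.corrupt = nn.Dropout(corruption)   # masking noise (rescaled)
        self.enc = nn.Linear(d_in, d_hidden)
        self.dec = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(self.corrupt(x)))
        return torch.sigmoid(self.dec(h)), h

ae = DenoisingAutoencoder(784, 256)
x = torch.rand(32, 784)                       # stand-in for an image batch
recon, h = ae(x)
loss = nn.functional.binary_cross_entropy(recon, x)
loss.backward()                               # gradients for one step
# Stacking: train this layer, freeze it, then train the next layer on h.
```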
Extended stable semantics for normal and disjunctive programs
A neural probabilistic language model A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that it allows the model to take advantage of longer contexts.
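The architecture is small enough to sketch directly: embed the context words, concatenate, pass through a tanh layer, and predict the next word. A minimal PyTorch rendering, without the paper's optional direct input-to-output connections; all sizes here are made up:

```python
import torch
import torch.nn as nn

class NPLM(nn.Module):
    # Embed the context words, concatenate, tanh hidden layer, then a
    # distribution over the vocabulary (via cross_entropy on the logits).
    def __init__(self, vocab, dim=64, context=3, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.hid = nn.Linear(context * dim, hidden)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, ctx):                     # ctx: (batch, context) ids
        e = self.emb(ctx).flatten(1)            # (batch, context * dim)
        return self.out(torch.tanh(self.hid(e)))

model = NPLM(vocab=10000)
ctx = torch.randint(0, 10000, (8, 3))           # made-up word-id batch
target = torch.randint(0, 10000, (8,))          # next word for each row
loss = nn.functional.cross_entropy(model(ctx), target)
```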
On the scale and performance of cooperative Web proxy caching While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
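The redundancy mechanism underlying the parity-based RAID levels is byte-wise XOR: the parity block is the XOR of the data blocks, so any single lost block is the XOR of everything that survives. A tiny sketch with made-up block contents:

```python
def parity(blocks):
    # Byte-wise XOR across equal-length data blocks.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def reconstruct(surviving_blocks, parity_block):
    # XOR of the parity with every surviving block recovers the lost one.
    return parity(surviving_blocks + [parity_block])

data = [b"disk0AAA", b"disk1BBB", b"disk2CCC"]
p = parity(data)
print(reconstruct([data[0], data[2]], p) == data[1])   # True
```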
The Boolean hierarchy: hardware over NP In this paper, we study the complexity of sets formed by boolean operations ($\cup$, $\cap$, and complementation) on NP sets. These are the sets accepted by trees of hardware with NP predicates as leaves, and together form the boolean hierarchy. We present many results about the boolean hierarchy: separation and immunity results, complete languages, upward separations, connections to sparse oracles for NP, and structural asymmetries between complementary classes. Some results present new ideas and techniques. Others put previous results about NP and $D^{P}$ in a richer perspective. Throughout, we emphasize the structure of the boolean hierarchy and its relations with more common classes.
Fine-Grained Mobility in the Emerald System (Extended Abstract)
Normal forms for answer sets programming Normal forms for logic programs under stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The body of rules is composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle, or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e. the syntactic characterization of existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called Cycle Graph, which is presented in the companion article Costantini (2004b).
ARIMA time series modeling and forecasting for adaptive I/O prefetching Bursty application I/O patterns, together with transfer limited storage devices, combine to create a major I/O bottleneck on parallel systems. This paper explores the use of time series models to forecast application I/O request times and then prefetch I/O requests during computation intervals to hide I/O latency. Experimental results with I/O intensive scientific codes show performance improvements compared to standard UNIX prefetching strategies.
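The forecast-then-prefetch loop can be sketched with an off-the-shelf ARIMA fit; here statsmodels stands in for the paper's models, and the request-gap series, model order, and latency threshold are all made up:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Made-up history of inter-arrival gaps between I/O requests (seconds).
history = np.abs(np.sin(np.arange(200) / 7.0)) + 0.1

fitted = ARIMA(history, order=(2, 0, 1)).fit()
predicted_gaps = fitted.forecast(steps=5)     # forecast future gaps

IO_LATENCY = 0.5   # assumed cost of one prefetch, seconds
for i, gap in enumerate(predicted_gaps, start=1):
    if gap > IO_LATENCY:
        print(f"step {i}: predicted {gap:.2f}s of compute -> prefetch now")
```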
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.2
0.04
0.009524
0.00084
0
0
0
0
0
0
0
0
0
0
A generator of direct manipulation office systems A system for generating direct manipulation office systems is described. In these systems, the user directly manipulates graphical representations of office entities instead of dealing with these entities abstractly through a command language or menu system. These systems employ a new semantic data model to describe office entities. New techniques based on attribute grammars and incremental attribute evaluation are used to implement this data model in an efficient manner. In addition, the system provides a means of generating sophisticated graphics-based user interfaces that are integrated with the underlying semantic model. Finally, the generated systems contain a general user reversal and recovery (or undo) mechanism that allows them to be much more tolerant of human errors.
Computer-Aided Design of User Interfaces I, Proceedings of the Second International Workshop on Computer-Aided Design of User Interfaces, June 5-7, 1996, Namur, Belgium
From OOA to GUIs: The JANUS System
Specification and Generation of User Interfaces with the BOSS-System The paper describes the BOSS-System, belonging to the class of model based user interface construction tools, which generate an executable user interface out of a declarative description (model) of an interactive application. BOSS gains a rather high level of usability due to a couple of reasons: BOSS employs an encompassing specification model (HIT, Hierarchic Interaction graph Templates) which allows the declarative description of all parts of the model of an interactive application...
Management of interface design in humanoid Today's interface design tools either force designers to handle a tremendous number of design details, or limit their control over design decisions. Neither of these approaches taps the true strengths of either human designers or computers in the design process. This paper presents a human-computer collaborative system that uses a model-based approach for interface design to help designers on decision making, and utilizes the bookkeeping capabilities of computers for regular and tedious tasks. We describe (a) the underlying modeling technique and an execution environment that allows even incompletely-specified designs to be executed for evaluation and testing purposes, and (b) a tool that decomposes high-level design goals into the necessary implementation steps, and helps designers manage the myriad of details that arise during design.
The FUSE-System: an Integrated User Interface Design Environment With the FUSE (Formal User Interface Specification Environment)-System we present a methodology and a set of integrated tools for the automatic generation of graphical user interfaces. FUSE provides tool-based support for all phases (task-, user-, problem domain analysis, design of the logical user interface, design of user interface in a particular layout style) of the user interface development process. Based on a formal specification of dialogue- and layout guidelines, FUSE allows the automatic generation of user interfaces out of specifications of the task-, problem domain- and user-model. Moreover, the FUSE-System incorporates a component for the automatic generation of powerful help- and user guidance components. In this paper, we describe the FUSE-methodology by modelling user interfaces of an ISDN phone simulation. Furthermore, the two major components of FUSE (BOSS, PLUG-IN) are presented: The BOSS-System supports the design of the logical user interface and the formal specification of layout guidelines. PLUG-IN generates task-based help- and user guidance components.
Multi-threading and remote latency in software DSMs This paper evaluates the use of per-node multi-threading to hide remote memory and synchronization latencies in a software DSM. As with hardware systems, multi-threading in software systems can be used to reduce the costs of remote requests by switching threads when the current thread blocks. We added multi-threading to the CVM software DSM and evaluated its impact on performance for a suite of common shared memory programs. Multi-threading resulted in speed improvements of at least 17% in three of the seven applications in our suite, and lesser improvements in the other applications. However, we found that: good performance is not always achievable transparently for non-trivial applications; multi-threading can negatively interact with DSM operations; multi-threading decreases cache and TLB locality; and any multi-threading speedup is dependent on available work.
I/O reference behavior of production database workloads and the TPC benchmarks—an analysis at the logical level As improvements in processor performance continue to far outpace improvements in storage performance, I/O is increasingly the bottleneck in computer systems, especially in large database systems that manage huge amounts of data. The key to achieving good I/O performance is to thoroughly understand its characteristics. In this article we present a comprehensive analysis of the logical I/O reference behavior of the peak production database workloads from ten of the world's largest corporations. In particular, we focus on how these workloads respond to different techniques for caching, prefetching, and write buffering. Our findings include several broadly applicable rules of thumb that describe how effective the various I/O optimization techniques are for the production workloads. For instance, our results indicate that the buffer pool miss ratio tends to be related to the ratio of buffer pool size to data size by an inverse square root rule. A similar fourth root rule relates the write miss ratio and the ratio of buffer pool size to data size. In addition, we characterize the reference characteristics of workloads similar to the Transaction Processing Performance Council (TPC) benchmarks C (TPC-C) and D (TPC-D), which are de facto standard performance measures for online transaction processing (OLTP) systems and decision support systems (DSS), respectively. Since benchmarks such as TPC-C and TPC-D can only be used effectively if their strengths and limitations are understood, a major focus of our analysis is to identify aspects of the benchmarks that stress the system differently than the production workloads. We discover that for the most part, the reference behavior of TPC-C and TPC-D falls within the range of behavior exhibited by the production workloads. However, there are some noteworthy exceptions that affect well-known I/O optimization techniques such as caching (LRU is further from the optimal for TPC-C, while there is little sharing of pages between transactions for TPC-D), prefetching (TPC-C exhibits no significant sequentiality), and write buffering (write buffering is less effective for the TPC benchmarks). While the two TPC benchmarks generally complement one another in reflecting the characteristics of the production workloads, there remain aspects of the real workloads that are not represented by either of the benchmarks.
Extended ephemeral logging: log storage management for applications with long lived transactions Extended ephemeral logging (XEL) is a new technique for managing a log of database activity subject to the general assumption that the lifetimes of an application’s transactions may be statistically distributed over a wide range. The log resides on nonvolatile disk storage and provides fault tolerance to system failures (in which the contents of volatile main memory storage may be lost). XEL segments a log into a chain of fixed-size FIFO queues and performs generational garbage collection on records in the log. Log records that are no longer necessary for recovery purposes are “thrown away” when they reach the head of a queue; only records that are still needed for recovery are forwarded from the head of one queue to the tail of the next. XEL does not require checkpoints, permits fast recovery after a crash and is well suited for applications that have a wide distribution of transaction lifetimes. Quantitative evaluation of XEL via simulation indicates that it can significantly reduce the disk space required for the log, at the expense of slightly higher bandwidth for log information and more main memory; the reduced size of the log permits much faster recovery after a crash as well as cost savings. XEL can significantly reduce both the disk space and the disk bandwidth required for log information in a system that has been augmented with a nonvolatile region of main memory.
Hybrid-range partitioning strategy: a new declustering strategy for multiprocessor database machines In shared-nothing multiprocessor database machines, the relational operators that form a query are executed on the processors where the relations they reference are stored. In general, as the number of processors over which a relation is declustered is increased, the execution time for the query is decreased because more processors are used, each of which has to process fewer tuples. However, for some queries increasing the degree of declustering actually increases the query's response time as the result of increased overhead for query startup, communication, and termination. In general, the declustering strategy selected for a relation can have a significant impact on the overall performance of the system. This paper presents the hybrid-range partitioning strategy, a new declustering strategy for multiprocessor database machines. In addition to describing its characteristics and operation, its performance is compared to that of the current partitioning strategies provided by the Gamma database machine.
Systematic Nonlinear Planning This paper presents a simple, sound, complete, and systematic algorithm for domain independent STRIPS planning. Simplicity is achieved by starting with a ground procedure and then applying a general, and independently verifiable, lifting transformation. Previous planners have been designed directly as lifted procedures. Our ground procedure is a ground version of Tate's NONLIN procedure. In Tate's procedure one is not required to determine whether a prerequisite of a step in an unfinished plan is guaranteed to hold in all linearizations. This allows Tate's procedure to avoid the use of Chapman's modal truth criterion. Systematicity is the property that the same plan, or partial plan, is never examined more than once.
What regularized auto-encoders learn from the data-generating distribution. What do auto-encoders learn about the underlying data-generating distribution? Recent work suggests that some auto-encoder variants do a good job of capturing the local manifold structure of data. This paper clarifies some of these previous observations by showing that minimizing a particular form of regularized reconstruction error yields a reconstruction function that locally characterizes the shape of the data-generating density. We show that the auto-encoder captures the score (derivative of the log-density with respect to the input). It contradicts previous interpretations of reconstruction error as an energy function. Unlike previous results, the theorems provided here are completely generic and do not depend on the parameterization of the auto-encoder: they show what the auto-encoder would tend to if given enough capacity and examples. These results are for a contractive training criterion we show to be similar to the denoising auto-encoder training criterion with small corruption noise, but with contraction applied on the whole reconstruction function rather than just encoder. Similarly to score matching, one can consider the proposed training criterion as a convenient alternative to maximum likelihood because it does not involve a partition function. Finally, we show how an approximate Metropolis-Hastings MCMC can be setup to recover samples from the estimated distribution, and this is confirmed in sampling experiments.
Two-Layer Multiple Kernel Learning.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.053968
0.054226
0.037004
0.028003
0.019587
0.002528
0.000011
0.000002
0
0
0
0
0
0
Avoiding Degradation In Deep Feed-Forward Networks By Phasing Out Skip-Connections A widely observed phenomenon in deep learning is the degradation problem: increasing the depth of a network leads to a decrease in performance on both test and training data. Novel architectures such as ResNets and Highway networks have addressed this issue by introducing various flavors of skip-connections or gating mechanisms. However, the degradation problem persists in the context of plain feed-forward networks. In this work we propose a simple method to address this issue. The proposed method poses the learning of weights in deep networks as a constrained optimization problem where the presence of skip-connections is penalized by Lagrange multipliers. This allows for skip-connections to be introduced during the early stages of training and subsequently phased out in a principled manner. We demonstrate the benefits of such an approach with experiments on MNIST, fashion-MNIST, CIFAR-10 and CIFAR-100 where the proposed method is shown to greatly decrease the degradation effect and is often competitive with ResNets.
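As a crude, schedule-based stand-in for the paper's Lagrange-multiplier formulation (illustration only, not the authors' method): scale each block's identity shortcut by a coefficient alpha and anneal it from 1 to 0 during training, so skip-connections help early optimization and are phased out later.

```python
import torch
import torch.nn as nn

class FadingSkipBlock(nn.Module):
    # Feed-forward block whose identity shortcut is scaled by alpha;
    # annealing alpha from 1 to 0 phases the skip-connection out.
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width))
        self.alpha = 1.0   # plain scalar here, not a learned multiplier

    def forward(self, x):
        return self.body(x) + self.alpha * x

net = nn.Sequential(*[FadingSkipBlock(64) for _ in range(8)])
x = torch.randn(32, 64)
for epoch in range(10):
    for block in net:
        block.alpha = max(0.0, 1.0 - epoch / 5.0)  # linear phase-out
    out = net(x)   # a real training step (loss, backward) would go here
```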
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
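The key consequence noted above, that undoing a multi-level transaction means compensating completed subtransactions rather than restoring pages, can be shown with a toy undo log of inverse operations. The sketch below is a minimal illustration of compensation-based rollback with hypothetical operation names; it is not modeled on DASDBS internals.

```python
# Toy multi-level undo: each completed high-level operation logs a
# compensating action; rollback replays compensations in reverse order.
log = []

def insert(table, key, db):
    db.setdefault(table, set()).add(key)
    log.append(("delete", table, key))   # compensation for the insert

def rollback(db):
    while log:
        op, table, key = log.pop()       # last operation first
        if op == "delete":
            db[table].discard(key)

db = {}
insert("accounts", 42, db)
insert("accounts", 7, db)
rollback(db)
print(db)  # {'accounts': set()} -- both inserts compensated
```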
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
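The computation the abstract describes, extracting principal components in a kernel-induced feature space, comes down to an eigendecomposition of the centered Gram matrix. The numpy sketch below uses an RBF kernel; the kernel choice, the gamma value, and the toy data are assumptions made for the example rather than the paper's setup.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: eigendecompose the centered Gram matrix."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # k(x_i, x_j)
    one_n = np.full((n, n), 1.0 / n)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n   # center in feature space
    eigvals, eigvecs = np.linalg.eigh(K)                # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]    # top components
    alphas = eigvecs[:, order] / np.sqrt(eigvals[order])  # normalize coefficients
    return K @ alphas                                   # projections of the data

X = np.random.RandomState(0).randn(50, 3)
print(kernel_pca(X, n_components=2).shape)  # (50, 2)
```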
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
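The load-balancing benefit of mirroring that the abstract builds on, serving each read from whichever copy is less busy, can be shown with a toy dispatcher. The sketch below is a shortest-queue illustration only; it is not the paper's RT-DMQ or RT-CMQ policy, and the disk names are hypothetical.

```python
# Toy dispatcher for a mirrored pair: send each read to the copy with the
# shorter queue, the parallelism that mirrored reads make possible.
queues = {"disk_a": [], "disk_b": []}

def dispatch_read(request_id):
    target = min(queues, key=lambda d: len(queues[d]))  # least-loaded copy
    queues[target].append(request_id)
    return target

print([dispatch_read(i) for i in range(5)])
# ['disk_a', 'disk_b', 'disk_a', 'disk_b', 'disk_a']
```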
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
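The square-root idea above is that instead of forming the information matrix A^T A, one factors the measurement Jacobian A = QR and back-substitutes. The sketch below does this for a hypothetical one-dimensional, three-pose odometry chain; the measurements are invented for the example.

```python
import numpy as np

# Toy 1-D "SLAM": three poses linked by odometry, pose 0 anchored at the origin.
# Each row of A encodes one measurement; b holds the readings.
A = np.array([[1.0,  0.0, 0.0],   # prior: x0 = 0
              [-1.0, 1.0, 0.0],   # odometry: x1 - x0 = 1.0
              [0.0, -1.0, 1.0]])  # odometry: x2 - x1 = 1.1
b = np.array([0.0, 1.0, 1.1])

# Square-root smoothing: factor A = Q R, then solve R x = Q^T b rather than
# forming and inverting the information matrix A^T A.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)
print(x)  # approx [0.0, 1.0, 2.1] -- the full trajectory, recovered at once
```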
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representations for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to their distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
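The rule-selection step described above, scoring translation rules by distributional similarity between topic representations, reduces at decoding time to a similarity lookup. The sketch below uses cosine similarity over invented three-topic vectors; the vectors and rule names are hypothetical, and the paper's actual representations come from a trained neural network.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two topic vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical topic vectors for a source sentence and two candidate rules;
# the decoder prefers the rule whose topics match the source text.
source = np.array([0.7, 0.2, 0.1])
rules = {"rule_sports": np.array([0.8, 0.1, 0.1]),
         "rule_finance": np.array([0.1, 0.8, 0.1])}
best = max(rules, key=lambda r: cosine(rules[r], source))
print(best)  # rule_sports
```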
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Automatic Derivation and Application of Induction Schemes for Mutually Recursive Functions This paper advocates and explores the use of multi-predicate induction schemes for proofs about mutually recursive functions. The interactive application of multi-predicate schemes stemming from datatype definitions is already well-established practice; this paper describes an automated proof procedure based on multi-predicate schemes. Multi-predicate schemes may be formally derived from (mutually recursive) function definitions; such schemes are often helpful in proving properties of mutually recursive functions where the recursion pattern does not follow that of the underlying datatypes. These ideas have been implemented using the HOL theorem prover and the Clam proof planner.
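For readers unfamiliar with the setting, a minimal example of the kind of function pair involved is the classic even/odd definition below; proving a property of is_even typically needs an induction scheme with one induction predicate per function, which is what the multi-predicate schemes above automate. The Python rendering is illustrative only.

```python
def is_even(n):
    """n is even iff n == 0 or n - 1 is odd."""
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    """n is odd iff it is nonzero and n - 1 is even."""
    return False if n == 0 else is_even(n - 1)

# A property such as "is_even(n) or is_odd(n) for every n" is naturally
# proved by simultaneous induction over both predicates at once.
print(is_even(10), is_odd(10))  # True False
```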
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
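The arithmetic behind two-dimensional parity is plain XOR: any single lost element can be rebuilt from the surviving elements of its row (or column) plus the corresponding parity. The toy sketch below shows that mechanism on a 3x3 grid of bytes; it is a simplification made for the example, not the paper's n^2-element array layout.

```python
from functools import reduce
from operator import xor

# A 3x3 grid of data "elements" (single bytes here) with one parity element
# per row and one per column -- the two-dimensional parity scheme.
data = [[0x12, 0x34, 0x56],
        [0x78, 0x9A, 0xBC],
        [0xDE, 0xF0, 0x11]]

row_parity = [reduce(xor, row) for row in data]
col_parity = [reduce(xor, col) for col in zip(*data)]

# Simulate losing element (1, 2) and rebuilding it two independent ways.
lost_r, lost_c = 1, 2
from_row = row_parity[lost_r] ^ data[lost_r][0] ^ data[lost_r][1]
from_col = col_parity[lost_c] ^ data[0][lost_c] ^ data[2][lost_c]
assert from_row == from_col == data[lost_r][lost_c]
print(hex(from_row))  # 0xbc
```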
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Integrating Planning And Scheduling Based On Genetic Algorithms To A Workflow System Workflow systems have been widely employed; however, the automatic generation of process models, especially as applied in Web Service domains, is still an area to be explored. Web service composition in workflows is an emerging paradigm that allows the interaction both of internal applications and of applications that go beyond organizational boundaries. In this context, this paper presents an architecture based on the use of a genetic planner to obtain the automatic generation of models for Web Service workflows. A simulation environment is also proposed, using scheduling techniques based on genetic algorithms to select the most suitable process model.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Evolution of Abstraction Across Layers in Deep Learning Neural Networks. Deep learning neural networks produce excellent results in various pattern recognition tasks. It is of great practical importance to answer some open questions regarding model design and parameterization, and to understand how input data are converted into meaningful knowledge at the output. The layer-by-layer evolution of the abstraction level has been proposed previously as a quantitative measure to describe the emergence of knowledge in the network. In this work we systematically evaluate the abstraction level for a variety of image datasets. We observe that there is a general tendency toward increasing abstraction from input to output, with the exception of a drop in abstraction at some ReLU and pooling layers. The abstraction level is relatively low and does not change significantly in the first few layers following the input, while it fluctuates around some high saturation value at the layers preceding the output. Finally, the layer-by-layer change in abstraction is not normally distributed; rather, it approximates an exponential distribution. These results point to salient local features of deep layers impacting overall (global) classification performance. We compare the results extracted from deep learning neural networks performing image processing tasks with the results obtained by analyzing brain imaging data. Our conclusions may be helpful in future designs of more efficient, compact deep learning neural networks.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representations for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to their distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Efficient Buffer Replacement Algorithm for NAND Flash Storage Devices NAND flash storage devices are now revolutionizing storage stacks. As their capacities are rapidly increasing, they are now being adopted virtually in all classes of computing devices, including mobile devices, desktop computers, and cloud servers. However, there are two remaining problems: first, as flash density increases, the lifetime of the storage medium decreases rapidly. Second, random writes significantly decrease performance and lifetime since they generate more hidden writes inside NAND flash storage devices. In this paper, we propose a novel buffer replacement algorithm called TS-CLOCK, to address those two problems. TS-CLOCK exploits temporal locality to keep the cache hit ratio high and also exploits spatial locality to maintain evicted writes flash-friendly. The key idea of our flash-friendly eviction is that, when evicting a dirty page, TS-CLOCK first selects a flash block with the largest number of dirty pages that are least likely to be accessed and then sequentially evicts pages in the block. Since it generates pseudo sequential writes for a flash block, it significantly increases performance and lifetime at once by reducing the number of hidden writes. We have implemented TS-CLOCK and compared it with seven replacement algorithms, including traditional and flash-aware ones, for several real workloads. Our experimental results show that TS-CLOCK outperforms the state-of-the-art replacement algorithm, Sp. Clock, by 30% on the NAND flash storage devices and extends the lifetime by 53%.
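As background for the algorithm above, the sketch below implements plain second-chance CLOCK replacement, the family of policies TS-CLOCK extends. Everything flash-specific in TS-CLOCK (grouping dirty pages by flash block and evicting a cold block's pages sequentially) is omitted here, so treat this as an assumed, simplified starting point rather than the paper's algorithm.

```python
class ClockCache:
    """Second-chance (CLOCK) replacement, sketched for illustration only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = []        # list of [page_id, referenced_bit]
        self.hand = 0

    def access(self, page_id):
        for entry in self.pages:
            if entry[0] == page_id:
                entry[1] = 1            # hit: give the page a second chance
                return "hit"
        if len(self.pages) < self.capacity:
            self.pages.append([page_id, 1])
            return "miss"
        while self.pages[self.hand][1]:     # sweep, clearing reference bits
            self.pages[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        self.pages[self.hand] = [page_id, 1]    # evict the cold victim
        self.hand = (self.hand + 1) % self.capacity
        return "miss"

cache = ClockCache(3)
print([cache.access(p) for p in [1, 2, 3, 1, 4]])
# ['miss', 'miss', 'miss', 'hit', 'miss']
```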
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representations for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to their distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Convergent Block Coordinate Descent for Training Tikhonov Regularized Deep Neural Networks By lifting the ReLU function into a higher dimensional space, we develop a smooth multi-convex formulation for training feed-forward deep neural networks (DNNs). This allows us to develop a block coordinate descent (BCD) training algorithm consisting of a sequence of numerically well-behaved convex optimizations. Using ideas from proximal point methods in convex analysis, we prove that this BCD algorithm will converge globally to a stationary point with R-linear convergence rate of order one. In experiments with the MNIST database, DNNs trained with this BCD algorithm consistently yielded better test-set error rates than identical DNN architectures trained via all the stochastic gradient descent (SGD) variants in the Caffe toolbox.
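The general shape of the training procedure above, alternating exact convex solves over blocks of variables, can be demonstrated on a small Tikhonov-regularized least-squares problem. The sketch below is generic two-block coordinate descent with closed-form ridge updates; it is not the paper's lifted-ReLU formulation, and all problem sizes and the regularization weight are invented for the example.

```python
import numpy as np

# Two-block coordinate descent on the Tikhonov-regularized objective
#   ||A1 w1 + A2 w2 - y||^2 + lam * (||w1||^2 + ||w2||^2),
# where each block update is an exact convex (ridge) solve.
rng = np.random.RandomState(0)
A1, A2 = rng.randn(40, 5), rng.randn(40, 5)
y = rng.randn(40)
lam = 0.1
w1, w2 = np.zeros(5), np.zeros(5)

def ridge_solve(A, r, lam):
    # argmin_w ||A w - r||^2 + lam ||w||^2, in closed form
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ r)

for _ in range(50):
    w1 = ridge_solve(A1, y - A2 @ w2, lam)   # minimize over block 1
    w2 = ridge_solve(A2, y - A1 @ w1, lam)   # minimize over block 2

obj = np.sum((A1 @ w1 + A2 @ w2 - y) ** 2) + lam * (w1 @ w1 + w2 @ w2)
print(round(float(obj), 4))  # the objective decreases monotonically per sweep
```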
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
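A worked micro-example of the definition, assuming the usual reduct-based characterization of stable models (rules are triples of head, positive body, negated body):

rules = [("p", ["q"], []),      # p :- q.
         ("q", [], ["r"])]      # q :- not r.

def least_model(positive_rules):
    # least Herbrand model of a negation-free program, by fixpoint iteration
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if set(pos) <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(candidate, rules):
    # Gelfond-Lifschitz reduct: drop rules blocked by the candidate,
    # then strip the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

print(is_stable({"p", "q"}, rules))   # True: {p, q} is the stable model
print(is_stable({"r"}, rules))        # False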
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
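The preprocessor that eliminates classical negation, mentioned at the end of the abstract, can be sketched as a renaming step; the consistency constraint forbidding both p and neg_p from holding together is omitted here.

def rename(lit):
    # map a classically negated literal "-p" to a fresh atom "neg_p"
    return "neg_" + lit[1:] if lit.startswith("-") else lit

rules = [("-flies", ["penguin"], []),        # -flies :- penguin.
         ("flies", ["bird"], ["-flies"])]    # flies :- bird, not -flies.
renamed = [(rename(h), [rename(b) for b in pos], [rename(b) for b in neg])
           for h, pos, neg in rules]
print(renamed)   # an equivalent general program without classical negation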
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
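Independent of the paper's optimizations, the semantics such a prover implements can be written as a naive recursive evaluator over the quantifier prefix; this sketch assumes a prenex formula whose matrix is supplied as a Python predicate.

def eval_qbf(prefix, matrix, assignment=None):
    # prefix: list of ("exists"/"forall", variable); matrix: assignment -> bool
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    q, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: val})
                for val in (False, True))
    return any(branches) if q == "exists" else all(branches)

# forall x exists y. (x xor y) -- true: pick y = not x.
prefix = [("forall", "x"), ("exists", "y")]
print(eval_qbf(prefix, lambda a: a["x"] != a["y"]))   # True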
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
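The compensation discipline described above can be sketched with a toy executor (not DASDBS code): completed subtransactions are undone semantically by running their compensating operations in reverse order.

def boom():
    raise RuntimeError("subtransaction failed")

def run_multilevel(ops):
    # ops: list of (name, action, compensating action)
    done = []
    try:
        for name, action, compensate in ops:
            action()
            done.append((name, compensate))
        return "commit"
    except RuntimeError:
        for name, compensate in reversed(done):
            compensate()          # semantic undo of a finished subtransaction
        return "abort"

store = {"x": 0}
ops = [("inc x",
        lambda: store.update(x=store["x"] + 1),
        lambda: store.update(x=store["x"] - 1)),
       ("fails", boom, lambda: None)]
print(run_multilevel(ops), store)   # abort {'x': 0}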
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
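A minimal kernel PCA sketch, assuming an RBF kernel: center the Gram matrix in feature space, eigendecompose it, and project the training points onto the leading components.

import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # RBF Gram matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])  # normalized coefficients
    return Kc @ alphas                             # component scores of the training points

X = np.random.default_rng(0).normal(size=(100, 5))
print(kernel_pca(X).shape)   # (100, 2)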
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
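The core numerical step can be shown directly: instead of forming the information matrix A^T A, QR-factorize the measurement Jacobian and back-substitute; A and b below are random stand-ins for a linearized SLAM problem.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 12))      # stand-in measurement Jacobian (poses + landmarks)
b = rng.normal(size=40)            # stand-in residual vector

Q, R = np.linalg.qr(A)             # R is the square-root information matrix
dx = np.linalg.solve(R, Q.T @ b)   # back-substitution gives the least-squares update
print(np.allclose(A.T @ A @ dx, A.T @ b))   # satisfies the normal equations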
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, so broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Design and evaluation of fault-tolerant shared file system for cluster systems The paper describes the design and evaluation of a Fault Tolerant Shared File System (FTSFS) architecture for cluster systems with shared disks. The FTSFS architecture: guarantees no file system (FS) structure crashes on processor/program failure; can be applied to any existing non-shared FS without changing the structure of the FS; and does not degrade performance on the shared FS compared with a standard non-shared FS. Using the FTSFS architecture, we implemented a fault-tolerant shared FS on Fujitsu's SVR4 duplex system and evaluated the system performance. The evaluation showed that the shared FS is competitive in performance with the standard SVR4-UFS (Unix File System).
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
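The parity arithmetic such schemes rest on is plain XOR; this sketch (not the paper's two-dimensional layout) shows how one lost block in a stripe is rebuilt from the survivors, and mirroring a parity element simply stores the same XOR twice.

def parity(blocks):
    # XOR all equally sized blocks together
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

stripe = [b"data-one", b"data-two", b"data-3!!"]
p = parity(stripe)                              # parity element for the stripe
rebuilt = parity([stripe[0], stripe[2], p])     # recover the lost second block
print(rebuilt == stripe[1])                     # True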
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Learning to Mix n-Step Returns: Generalizing lambda-Returns for Deep Reinforcement Learning. Reinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: the value function for the current state is moved towards a bootstrapped target that is estimated using the next state's value function. lambda-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in lambda-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to lambda-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and lambda-returns) considerably outperforms the use of n-step returns. We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain.
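For reference, the target being generalized can be computed explicitly; this sketch forms the lambda-return as the exponentially weighted mixture of n-step returns that CAR replaces with learned weights.

import numpy as np

def n_step_return(rewards, values, t, n, gamma):
    T = len(rewards)
    end = min(t + n, T)
    g = sum(gamma ** (k - t) * rewards[k] for k in range(t, end))
    if end < T:                       # bootstrap from the value estimate at the cutoff
        g += gamma ** (end - t) * values[end]
    return g

def lambda_return(rewards, values, t, gamma=0.99, lam=0.95):
    T = len(rewards)
    ns = range(1, T - t + 1)
    weights = np.array([(1 - lam) * lam ** (n - 1) for n in ns])
    weights[-1] = lam ** (T - t - 1)  # remaining probability mass on the full return
    returns = np.array([n_step_return(rewards, values, t, n, gamma) for n in ns])
    return float(weights @ returns)

rewards, values = [0.0, 0.0, 1.0], [0.5, 0.5, 0.5]
print(lambda_return(rewards, values, t=0))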
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, so broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Domain Adaptation via Transfer Component Analysis Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
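The cross-domain quantity TCA minimizes can be sketched as the biased empirical maximum mean discrepancy under an RBF kernel; the two samples below are synthetic stand-ins for source and target domains.

import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(50, 3))     # source sample
Xt = rng.normal(0.5, 1.0, size=(50, 3))     # mean-shifted target sample
print(mmd_rbf(Xs, Xt))                                  # typically well above zero
print(mmd_rbf(Xs, rng.normal(0.0, 1.0, size=(50, 3))))  # typically near zero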
Deep Visual Domain Adaptation: A Survey. Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys of shallow domain adaptation, but few timely reviews of the emerging deep-learning-based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.
Histogram Of Template For Human Detection In this paper, we propose a novel feature named histogram of template (HOT) for human detection in still images. For every pixel of an image, various templates are defined, each of which contains the pixel itself and two of its neighboring pixels. If the intensity and gradient values of the three pixels satisfy a pre-defined function, the central pixel is regarded to meet the corresponding template for this function. Histograms of pixels meeting various templates are calculated for a set of functions, and combined to be the feature for detection. Compared to the other features, the proposed feature takes intensity as well as the gradient information into consideration. Besides, it reflects the relationship between 3 pixels, instead of focusing on only one. Experiments for human detection are performed on INRIA dataset, which shows the proposed HOT feature is more discriminative than histogram of orientated gradient (HOG) feature, under the same training method.
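A hypothetical illustration of the template idea (the paper's actual templates and pre-defined functions differ): test a three-pixel pattern at every interior pixel and histogram how often each template fires.

import numpy as np

def hot_like_feature(img, templates):
    # templates: neighbour offsets (dy1, dx1, dy2, dx2) relative to the centre pixel
    h = np.zeros(len(templates))
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            for i, (dy1, dx1, dy2, dx2) in enumerate(templates):
                if img[y, x] > img[y + dy1, x + dx1] and img[y, x] > img[y + dy2, x + dx2]:
                    h[i] += 1            # centre brighter than both neighbours
    return h / max(h.sum(), 1.0)

templates = [(-1, 0, 1, 0), (0, -1, 0, 1)]   # vertical / horizontal neighbour pairs
img = np.random.default_rng(0).integers(0, 256, size=(16, 16))
print(hot_like_feature(img, templates))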
A survey on heterogeneous transfer learning. Transfer learning has been demonstrated to be effective for many real-world applications as it exploits knowledge present in labeled training data from a source domain to enhance a model’s performance in a target domain, which has little or no labeled target training data. Utilizing a labeled source, or auxiliary, domain for aiding a target task can greatly reduce the cost and effort of collecting sufficient training labels to create an effective model in the new target distribution. Currently, most transfer learning methods assume the source and target domains consist of the same feature spaces which greatly limits their applications. This is because it may be difficult to collect auxiliary labeled source domain data that shares the same feature space as the target domain. Recently, heterogeneous transfer learning methods have been developed to address such limitations. This, in effect, expands the application of transfer learning to many other real-world tasks such as cross-language text categorization, text-to-image classification, and many others. Heterogeneous transfer learning is characterized by the source and target domains having differing feature spaces, but may also be combined with other issues such as differing data distributions and label spaces. These can present significant challenges, as one must develop a method to bridge the feature spaces, data distributions, and other gaps which may be present in these cross-domain learning tasks. This paper contributes a comprehensive survey and analysis of current methods designed for performing heterogeneous transfer learning tasks to provide an updated, centralized outlook into current methodologies.
What you saw is not what you get: Domain adaptation using asymmetric kernel transforms In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks.
Tabula rasa: Model transfer for object category detection Our objective is transfer training of a discriminatively trained object category detector, in order to reduce the number of training images required. To this end we propose three transfer learning formulations where a template learnt previously for other categories is used to regularize the training of a new category. All the formulations result in convex optimization problems. Experiments (on PASCAL VOC) demonstrate significant performance gains by transfer learning from one class to another (e.g. motorbike to bicycle), including one-shot learning, specialization from class to a subordinate class (e.g. from quadruped to horse) and transfer using multiple components. In the case of multiple training samples it is shown that a detection performance approaching that of the state of the art can be achieved with substantially fewer training samples.
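The regularized-transfer formulation can be sketched as ridge regression pulled toward a previously learnt template, min_w ||Xw - y||^2 + lam ||w - w_prev||^2, which has the closed form below; all data here are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 10)), rng.normal(size=60)
w_prev = rng.normal(size=10)    # template learnt previously for a related category
lam = 5.0

A = X.T @ X + lam * np.eye(10)
w_transfer = np.linalg.solve(A, X.T @ y + lam * w_prev)   # pulled toward w_prev
w_plain = np.linalg.solve(A, X.T @ y)                     # plain ridge, pulled toward 0
print(np.linalg.norm(w_transfer - w_prev), np.linalg.norm(w_plain - w_prev))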
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Preliminary investigation of Boltzmann machine classifiers for speaker recognition.
Tensor Deep Stacking Networks A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary [0, 1] features. A learning algorithm for the T-DSN’s weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.
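The bilinear block at the core of the T-DSN can be sketched in a few lines; the sizes and the sigmoid nonlinearity are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                                     # block input
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 3))  # two hidden-layer weight matrices
T = rng.normal(size=(4, 3, 2))                             # weight tensor to 2 outputs

h1 = 1.0 / (1.0 + np.exp(-(x @ W1)))      # first hidden layer
h2 = 1.0 / (1.0 + np.exp(-(x @ W2)))      # second hidden layer
y = np.einsum("i,j,ijk->k", h1, h2, T)    # bilinear mapping to the block output
print(y.shape)   # (2,)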
Empirical Analysis of Predictive Algorithms for Collaborative Filtering Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation...
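One of the correlation-based memory methods the paper compares can be sketched as follows: predict a user's rating as their mean plus the correlation-weighted deviations of the other users; a fully observed toy matrix keeps the sketch short.

import numpy as np

R = np.array([[5.0, 3.0, 4.0],
              [4.0, 2.0, 5.0],
              [1.0, 5.0, 2.0]])   # users x items

def predict(R, user, item):
    means = R.mean(axis=1)
    others = [u for u in range(R.shape[0]) if u != user]
    w = np.array([np.corrcoef(R[user], R[u])[0, 1] for u in others])  # Pearson weights
    dev = np.array([R[u, item] - means[u] for u in others])
    return means[user] + (w @ dev) / (np.abs(w).sum() + 1e-12)

print(round(predict(R, user=0, item=2), 2))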
The complexity of acyclic conjunctive queries This paper deals with the evaluation of acyclic Boolean conjunctive queries in relational databases. By well-known results of Yannakakis [1981], this problem is solvable in polynomial time; its precise complexity, however, has not been pinpointed so far. We show that the problem of evaluating acyclic Boolean conjunctive queries is complete for LOGCFL, the class of decision problems that are logspace-reducible to a context-free language. Since LOGCFL is contained in AC1 and NC2, the evaluation problem of acyclic Boolean conjunctive queries is highly parallelizable. We present a parallel database algorithm solving this problem with a logarithmic number of parallel join operations. The algorithm is generalized to computing the output of relevant classes of non-Boolean queries. We also show that the acyclic versions of the following well-known database and AI problems are all LOGCFL-complete: the Query Output Tuple problem for conjunctive queries, Conjunctive Query Containment, Clause Subsumption, and Constraint Satisfaction. The LOGCFL-completeness result is extended to the class of queries of bounded tree width and to other relevant query classes which are more general than the acyclic queries.
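The semijoin step at the heart of Yannakakis' algorithm, sketched for one parent-child edge of a join tree: after a bottom-up pass of such reductions, a Boolean acyclic query is true iff the root relation is nonempty.

def semijoin(parent, child, attrs):
    # keep parent tuples that join with at least one child tuple on attrs
    keys = {tuple(t[a] for a in attrs) for t in child}
    return [t for t in parent if tuple(t[a] for a in attrs) in keys]

R = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
S = [{"b": 2, "c": 5}]
print(semijoin(R, S, attrs=["b"]))   # [{'a': 1, 'b': 2}]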
Architectural implications of quantum computing technologies In this article we present a classification scheme for quantum computing technologies that is based on the characteristics most relevant to computer systems architecture. The engineering trade-offs of execution speed, decoherence of the quantum states, and size of systems are described. Concurrency, storage capacity, and interconnection network topology influence algorithmic efficiency, while quantum error correction and necessary quantum state measurement are the ultimate drivers of logical clock speed. We discuss several proposed technologies. Finally, we use our taxonomy to explore architectural implications for common arithmetic circuits, examine the implementation of quantum error correction, and discuss cluster-state quantum computation.
Reliable Communication in VPL We compare different degrees of architecture abstraction and communication reliability in distributed programming languages. A nearly architecture-independent logic programming language and system with reliable communication, called VPL (Vienna Parallel Logic), is presented. We point out the contradiction between complete architecture independence and reliable high-level communication in programming languages. The description of an implementation technique of VPL's reliable communication on...
Secure the Cloud: From the Perspective of a Service-Oriented Organization In response to the revival of virtualized technology by Rosenblum and Garfinkel [2005], NIST defined cloud computing, a new paradigm in service computing infrastructures. In cloud environments, the basic security mechanism is ingrained in virtualization—that is, the execution of instructions at different privilege levels. Despite its obvious benefits, the caveat is that a crashed virtual machine (VM) is much harder to recover than a crashed workstation. When crashed, a VM is nothing but a giant corrupt binary file and quite unrecoverable by standard disk-based forensics. Therefore, VM crashes should be avoided at all costs. Security is one of the major contributors to such VM crashes. This includes compromising the hypervisor, cloud storage, images of VMs used infrequently, and remote cloud client used by the customer as well as threat from malicious insiders. Although using secure infrastructures such as private clouds alleviate several of these security problems, most cloud users end up using cheaper options such as third-party infrastructures (i.e., private clouds), thus a thorough discussion of all known security issues is pertinent. Hence, in this article, we discuss ongoing research in cloud security in order of the attack scenarios exploited most often in the cloud environment. We explore attack scenarios that call for securing the hypervisor, exploiting co-residency of VMs, VM image management, mitigating insider threats, securing storage in clouds, abusing lightweight software-as-a-service clients, and protecting data propagation in clouds. Wearing a practitioner's glasses, we explore the relevance of each attack scenario to a service company like Infosys. At the same time, we draw parallels between cloud security research and implementation of security solutions in the form of enterprise security suites for the cloud. We discuss the state of practice in the form of enterprise security suites that include cryptographic solutions, access control policies in the cloud, new techniques for attack detection, and security quality assurance in clouds.
1.02301
0.022222
0.01271
0.012469
0.010241
0.005358
0.000698
0.000016
0.000003
0
0
0
0
0
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets. Machine learning models based on neural networks and deep learning are being rapidly adopted for many purposes. What those models learn, and what they may share, is a significant concern when the training data may contain secrets and the models are public -- e.g., when a model helps users compose text messages using models trained on all users' messages. This paper presents exposure: a simple-to-compute metric that can be applied to any deep learning model for measuring the memorization of secrets. Using this metric, we show how to extract those secrets efficiently using black-box API access. Further, we show that unintended memorization occurs early, is not due to over-fitting, and is a persistent issue across different types of models, hyperparameters, and training strategies. We experiment with both real-world models (e.g., a state-of-the-art translation model) and datasets (e.g., the Enron email dataset, which contains users' credit card numbers) to demonstrate both the utility of measuring exposure and the ability to extract secrets. Finally, we consider many defenses, finding some ineffective (like regularization), and others to lack guarantees. However, by instantiating our own differentially-private recurrent model, we validate that by appropriately investing in the use of state-of-the-art techniques, the problem can be resolved, with high utility.
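The exposure metric can be sketched from its rank-based definition, assuming log-probability scores for the inserted canary and for every candidate sequence are available from the model.

import math

def exposure(canary_log_prob, candidate_log_probs):
    # rank = number of candidates the model scores at least as likely as the canary
    rank = sum(1 for lp in candidate_log_probs if lp >= canary_log_prob)
    return math.log2(len(candidate_log_probs)) - math.log2(rank)

# Hypothetical scores: the canary is the 2nd most likely of 1024 candidates.
scores = [-float(i) for i in range(1024)]   # stand-in log-probabilities
print(exposure(canary_log_prob=-1.0, candidate_log_probs=scores))   # 9.0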
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
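A compact sketch of the computation this abstract outlines, using an RBF kernel as one possible choice of integral-operator kernel; the kernel, the gamma value, and the random data are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=0.1):
    """Minimal kernel PCA sketch: build the kernel matrix, center it in
    feature space, and project onto the leading eigenvectors."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(K)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]          # largest first
    alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ alphas                              # projected training data

X = np.random.default_rng(0).standard_normal((50, 3))
print(kernel_pca(X, 2).shape)                       # (50, 2)
```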
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
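The core numerical idea, factorizing rather than inverting the information matrix, can be illustrated in a few lines. The Jacobian and residual below are random stand-ins, not a real SLAM problem.

```python
import numpy as np

# Sketch of the square-root-information idea: solve the linearized
# least-squares step  min ||A dx - b||^2  via QR factorization instead of
# forming the information matrix A^T A explicitly.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 12))   # stand-in measurement Jacobian
b = rng.standard_normal(100)         # stand-in residual vector

Q, R = np.linalg.qr(A)               # A = Q R; R is the square-root factor
dx = np.linalg.solve(R, Q.T @ b)     # back-substitution on R dx = Q^T b

# Same answer as the normal equations, but better conditioned numerically:
dx_normal = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(dx, dx_normal))
```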
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Training Products of Experts by Minimizing Contrastive Divergence It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual “expert” models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called “contrastive divergence” whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.
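A minimal sketch of the contrastive-divergence (CD-1) update for a binary RBM, the simplest product-of-experts instance; biases are omitted for brevity, and all names and constants are illustrative rather than taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, rng, lr=0.1):
    """One CD-1 step for a binary RBM (biases omitted)."""
    h0_prob = sigmoid(v0 @ W)                       # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0
    v1 = sigmoid(h0 @ W.T)                          # one-step reconstruction
    h1_prob = sigmoid(v1 @ W)                       # negative phase
    # approximate likelihood gradient: data stats minus reconstruction stats
    return W + lr * (v0.T @ h0_prob - v1.T @ h1_prob) / len(v0)

rng = np.random.default_rng(0)
W = np.zeros((6, 4))                                # 6 visible, 4 hidden units
batch = (rng.random((32, 6)) < 0.5) * 1.0           # stand-in minibatch
W = cd1_update(W, batch, rng)
print(W.shape)
```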
When Does a Mixture of Products Contain a Product of Mixtures? We derive relations between theoretical properties of restricted Boltzmann machines (RBMs), popular machine learning models which form the building blocks of deep learning models, and several natural notions from discrete mathematics and convex geometry. We give implications and equivalences relating RBM-representable probability distributions, perfectly reconstructible inputs, Hamming modes, zonotopes and zonosets, point configurations in hyperplane arrangements, linear threshold codes, and multicovering numbers of hypercubes. As a motivating application, we prove results on the relative representational power of mixtures of product distributions and products of mixtures of pairs of product distributions (RBMs) that formally justify widely held intuitions about distributed representations. In particular, we show that a mixture of products requiring an exponentially larger number of parameters is needed to represent the probability distributions which can be obtained as products of mixtures.
Learning Multilevel Distributed Representations for High-Dimensional Sequences We describe a new family of non-linear sequence models that are substantially more powerful than hidden Markov models or linear dynamical systems. Our models have simple approximate inference and learning procedures that work well in practice. Multilevel representations of sequential data can be learned one hidden layer at a time, and adding extra hidden layers improves the resulting generative models. The models can be trained with very high-dimensional, very non-linear data such as raw pixel sequences. Their performance is demonstrated using synthetic video sequences of two balls bouncing in a box.
Learning Invariant Color Features With Sparse Topographic Restricted Boltzmann Machines Our objective is to learn invariant color features directly from data via unsupervised learning. In this paper, we introduce a method to regularize restricted Boltzmann machines during training to obtain features that are sparse and topographically organized. Upon analysis, the features learned are Gabor-like and demonstrate a coding of orientation, spatial position, frequency and color that vary smoothly with the topography of the feature map. There is also differentiation between monochrome and color filters, with some exhibiting color-opponent properties. We also found that the learned representation is more invariant to affine image transformations and changes in illumination color.
Connections between score matching, contrastive divergence, and pseudolikelihood for continuous-valued variables. Score matching (SM) and contrastive divergence (CD) are two recently proposed methods for estimation of nonnormalized statistical methods without computation of the normalization constant (partition function). Although they are based on very different approaches, we show in this letter that they are equivalent in a special case: in the limit of infinitesimal noise in a specific Monte Carlo method. Further, we show how these methods can be interpreted as approximations of pseudolikelihood.
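For reference, the score-matching objective this letter builds on can be written as follows; this is a standard formulation, not quoted verbatim from the paper. Here psi(x; theta) is the model score, the gradient of the log-density with respect to x, so the partition function drops out:

```latex
% Score matching objective (Hyvarinen, 2005), with
% \psi(x;\theta) = \nabla_x \log p(x;\theta):
J(\theta) = \mathbb{E}_{x \sim p_{\text{data}}}\!\left[
    \sum_i \left( \partial_i \psi_i(x;\theta)
    + \tfrac{1}{2}\, \psi_i(x;\theta)^2 \right) \right]
```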
A priori trust inference with context-aware stereotypical deep learning We have proposed CAST, a new context-aware stereotypical trust model. We have considered a comprehensive set of seven context-aware stereotypes. We have applied a deep learning architecture to keep trust stereotyping robust. We have confirmed the effectiveness of CAST using a rich set of experiments. In multi-agent systems, stereotypical trust models are widely used to bootstrap a priori trust in case historical trust evidences are unavailable. These models can work well if and only if malicious agents share some common features (i.e., stereotypes) in their profiles and these features can be detected. However, this condition may not hold for all the adversarial scenarios. Smart attackers can show different trustworthiness to different agents and services (i.e., launching context-correlated attacks). In this paper, we propose CAST, a novel Context-Aware Stereotypical Trust deep learning framework. CAST coins a comprehensive set of seven context-aware stereotypes, each of which can capture a unique type of context-correlated attacks, as well as a deep learning architecture to keep the trust stereotyping robust (i.e., resist training errors). The basic idea is to construct a multi-layer perceptive structure to learn the latent correlations between context-aware stereotypes and the trustworthiness, and thus can estimate the new trust by taking into account the context information. We have evaluated CAST using a rich set of experiments over a simulated multi-agent system. The experimental results have successfully confirmed that our CAST can achieve approximately tens of times higher trust inference accuracy on average than the competing algorithms in the presence of context-correlated attacks, and more importantly can maintain a much better trust inference robustness against stereotyping errors.
How to Center Binary Restricted Boltzmann Machines.
Deep Learning For Emotional Speech Recognition Emotional speech recognition is a multidisciplinary research area that has received increasing attention over the last few years. The present paper considers the application of restricted Boltzmann machines (RBM) and deep belief networks (DBN) to the difficult task of automatic Spanish emotional speech recognition. The principal motivation lies in the success reported in a growing body of work employing these techniques as alternatives to traditional methods in speech processing and speech recognition. Here a well-known Spanish emotional speech database is used in order to extensively experiment with, and compare, different combinations of parameters and classifiers. It is found that with a suitable choice of parameters, RBM and DBN can achieve comparable results to other classifiers.
The effects of adding noise during backpropagation training on a generalization performance We study the effects of adding noise to the inputs, outputs, weight connections, and weight changes of multilayer feedforward neural networks during backpropagation training. We rigorously derive and analyze the objective functions that are minimized by the noise-affected training processes. We show that input noise and weight noise encourage the neural-network output to be a smooth function of the input or its weights, respectively. In the weak-noise limit, noise added to the output of the neural networks only changes the objective function by a constant. Hence, it cannot improve generalization. Input noise introduces penalty terms in the objective function that are related to, but distinct from, those found in the regularization approaches. Simulations have been performed on a regression and a classification problem to further substantiate our analysis. Input noise is found to be effective in improving the generalization performance for both problems. However, weight noise is found to be effective in improving the generalization performance only for the classification problem. Other forms of noise have practically no effect on generalization.
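The simplest instance of the input-noise scheme the paper analyzes is easy to write down. The linear least-squares model, learning rate, and noise level below are illustrative assumptions; in expectation, the input noise contributes a ridge-like penalty of sigma^2 * ||W||^2, which matches the paper's observation that input noise encourages smooth input-output mappings.

```python
import numpy as np

def noisy_input_step(W, X, y, rng, lr=0.01, sigma=0.1):
    """One gradient step on squared error with Gaussian noise added to the
    inputs, the simplest case of the scheme analyzed in the paper."""
    Xn = X + sigma * rng.standard_normal(X.shape)   # corrupt inputs per step
    grad = Xn.T @ (Xn @ W - y) / len(X)
    return W - lr * grad

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ np.arange(5.0)                      # noiseless targets for the demo
W = np.zeros(5)
for _ in range(2000):
    W = noisy_input_step(W, X, y, rng)
print(np.round(W, 1))                       # close to [0. 1. 2. 3. 4.]
```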
Proximal Methods for Hierarchical Sparse Coding Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper, we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the l1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally self-organize in a prespecified arborescent structure, leading to better performance in reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
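The elementary building block that the paper's dual approach composes along the tree is the l1 proximal operator, i.e. elementwise soft-thresholding. This sketch shows only that plain l1 case, not the tree-structured norm itself.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

print(prox_l1(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0))
# [-1. -0.  0.  0.  2.]   (magnitudes below lam are zeroed out)
```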
An overview of MetaMap: historical perspective and recent advances. MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD.
Efficient Temporal Reasoning In The Cached Event Calculus This article deals with the problem of providing Kowalski and Sergot's event calculus, extended with context dependency, with an efficient implementation in a logic programming framework. Despite a widespread recognition that a positive solution to efficiency issues is necessary to guarantee the computational feasibility of existing approaches to temporal reasoning, the problem of analyzing the complexity of temporal reasoning programs has been largely overlooked. This article provides a mathematical analysis of the efficiency of query and update processing in the event calculus and defines a cached version of the calculus that (i) moves computational complexity from query to update processing and (ii) features an absolute improvement of performance, because query processing in the event calculus costs much more than update processing in the proposed cached version.
An extensible set-top-box architecture for interactive and broadcast services offering sophisticated user guidance Currently available set-top-boxes (STBs) are mainly used for digital TV reception. The user interface (UI) and the UI dialog of such a device usually focus on its technological aspects and to a large degree ignore the needs of the user. The impact is that the user quite often is unsatisfied when interacting with the device. Recent UI design approaches (Lonczewski and Schreiber, 1996; Lonczewski, 1997) are proving that the focus should be put on the tasks that the user can perform with the STB rather than its underlying technical capabilities. However, modern design approaches both for the STB UI as well as its underlying system architecture can be incorporated into a complete system design with reasonable effort. This paper presents the architecture of the “d-box”, the STB being used for broadcasting TV services as well as future interactive services like Internet, home-shopping etc. in Germany. The layered software architecture employs a Java Virtual Machine offering a high degree of independency of its underlying hardware. Emphasis is put upon the user-friendliness of the device by providing a uniform UI both for experts and novices, therefore allowing intuitive and easy access for different user groups
Learning long-term dependencies in segmented-memory recurrent neural networks with backpropagation of error. In general, recurrent neural networks have difficulties in learning long-term dependencies. The segmented-memory recurrent neural network (SMRNN) architecture together with the extended real-time recurrent learning (eRTRL) algorithm was proposed to circumvent this problem. Due to its computational complexity eRTRL becomes impractical with increasing network size. Therefore, we introduce the less complex extended backpropagation through time (eBPTT) for SMRNN together with a layer-local unsupervised pre-training procedure. A comparison on the information latching problem showed that eRTRL is better able to handle the latching of information over longer periods of time, even though eBPTT guaranteed a better generalisation when training was successful. Further, pre-training significantly improved the ability to learn long-term dependencies with eBPTT. Therefore, the proposed eBPTT algorithm is suited for tasks that require big networks where eRTRL is impractical. The pre-training procedure itself is independent of the supervised learning algorithm and can improve learning in SMRNN in general.
Scores: 1.000845, 0.001468, 0.001006, 0.000894, 0.000842, 0.000815, 0.000754, 0.000678, 0.00043, 0.000203, 0.000004, 0, 0, 0
Feature extraction using Latent Dirichlet Allocation and Neural Networks: A case study on movie synopses. Feature extraction has gained increasing attention in the field of machine learning, as the detection of patterns, extraction of information, and prediction of future observations from big data all depend crucially on informative features. The process of extracting features is highly linked to dimensionality reduction, as it implies the transformation of the data from a sparse high-dimensional space to higher-level meaningful abstractions. This dissertation employs Neural Networks for distributed paragraph representations, and Latent Dirichlet Allocation to capture higher-level features of paragraph vectors. Although Neural Networks for distributed paragraph representations are considered the state of the art for extracting paragraph vectors, we show that a quick topic analysis model such as Latent Dirichlet Allocation can provide meaningful features too. We evaluate the two methods on the CMU Movie Summary Corpus, a collection of 25,203 movie plot summaries extracted from Wikipedia. Finally, for both approaches, we use K-Nearest Neighbors to discover similar movies, and plot the projected representations using T-Distributed Stochastic Neighbor Embedding to depict the context similarities. These similarities, expressed as movie distances, can be used for movie recommendation. The recommended movies of this approach are compared with the recommended movies from IMDB, which uses a collaborative filtering recommendation approach, to show that our two models could constitute either an alternative or a supplementary recommendation approach.
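A rough sketch of the LDA half of such a pipeline with scikit-learn (the paragraph-vector half would need a separate embedding model); the toy synopses and every parameter value are invented for illustration and are not the dissertation's setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import NearestNeighbors

synopses = [
    "a detective hunts a serial killer in a rainy city",
    "two robots fall in love after the end of the world",
    "a killer stalks a small town and a cop investigates",
    "a crew of astronauts repairs a station orbiting mars",
]
counts = CountVectorizer().fit_transform(synopses)      # bag-of-words counts
topics = LatentDirichletAllocation(n_components=2, random_state=0)
features = topics.fit_transform(counts)                 # movies x topics

nn = NearestNeighbors(n_neighbors=2).fit(features)
_, idx = nn.kneighbors(features[:1])
print(idx)   # the query movie itself plus its nearest topical neighbor
```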
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
QuBE++: An Efficient QBF Solver In this paper we describe QuBE++, an efficient solver for Quantified Boolean Formulas (QBFs). To the best of our knowledge, QuBE++ is the first QBF reasoning engine that uses lazy data structures both for unit clause propagation and for pure literal detection. QuBE++ also features non-chronological backtracking and a branching heuristic that leverages the information gathered during the backtracking phase. Owing to such techniques and to a careful implementation, QuBE++ turns out to be an efficient and robust solver, whose performance exceeds that of other state-of-the-art QBF engines, and is comparable with the best engines currently available on SAT instances.
Monotone Literals and Learning in QBF Reasoning Monotone literal fixing (MLF) and learning are well-known lookahead and lookback mechanisms in propositional satisfiability (SAT). When considering Quantified Boolean Formulas (QBFs), their separate implementation leads to significant speed-ups in state-of-the-art DPLL-based solvers. This paper is dedicated to the efficient implementation of MLF in a QBF solver with learning. The interaction between MLF and learning is far from being obvious, and it poses some nontrivial questions about both the detection and the propagation of monotone literals. Complications arise from the presence of learned constraints, and are related to the question of whether learned constraints have to be considered or not during the detection and/or propagation of monotone literals. In the paper we answer this question both from a theoretical and from a practical point of view. We discuss the advantages and the disadvantages of various solutions, and show that our solution of choice, implemented in our solver QuBE, produces significant speed-ups in most cases. Finally, we show that MLF can be fundamental also for solving some SAT instances, taken from the 2002 SAT solvers competition.
PaQuBE: Distributed QBF Solving with Advanced Knowledge Sharing In this paper we present the parallel QBF solver PaQuBE. This new solver leverages the additional computational power that can be exploited from modern computer architectures, from pervasive multicore boxes to clusters and grids, to solve more relevant instances, and faster, than previous-generation solvers. PaQuBE extends QuBE, its sequential core, by providing a Master/Slave Message Passing Interface (MPI) based design that allows it to split the problem up over an arbitrary number of distributed processes. Furthermore, PaQuBE's progressive parallel framework is the first to support advanced knowledge sharing in which solution cubes as well as conflict clauses can be shared. According to the last QBF Evaluation, QuBE is the most powerful state-of-the-art QBF solver. It was able to solve more than twice as many benchmarks as the next best independent solver. Our results here show that PaQuBE provides additional speedup, solving even more instances, faster.
Optimizing a BDD-Based Modal Solver In an earlier work we showed how a competitive satisfiability solver for the modal logic K can be built on top of a BDD package. In this work we study optimization issues for such solvers. We focus on two types of optimizations. First we study variable ordering, which is known to be of critical importance to BDD-based algorithms. Second, we study modal extensions of the pure-literal rule. Our results show that the payoff of the variable-ordering optimization is rather modest, while the payoff of the pure-literal optimization is quite significant. We benchmark our optimized solver against both native solvers (DLP) and translation-based solvers (MSPASS and SEMPROP). Our results indicate that the BDD-based approach dominates for modally heavy formulas, while search-based approaches dominate for propositionally-heavy formulas.
Towards a Symmetric Treatment of Satisfaction and Conflicts in Quantified Boolean Formula Evaluation In this paper, we describe a new framework for evaluating Quantified Boolean Formulas (QBF). The new framework is based on the Davis-Putnam (DPLL) search algorithm. In existing DPLL based QBF algorithms, the problem database is represented in Conjunctive Normal Form (CNF) as a set of clauses, implications are generated from these clauses, and backtracking in the search tree is chronological. In this work, we augment the basic DPLL algorithm with conflict driven learning as well as satisfiability directed implication and learning. In addition to the traditional clause database, we add a cube database to the data structure. We show that cubes can be used to generate satisfiability directed implications similar to conflict directed implications generated by the clauses. We show that in a QBF setting, conflicting leaves and satisfying leaves of the search tree both provide valuable information to the solver in a symmetric way. We have implemented our algorithm in the new QBF solver Quaffle. Experimental results show that for some test cases, satisfiability directed implication and learning significantly prunes the search.
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
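The procedure named in this title is the ancestor of the DPLL loop underlying the QBF solvers above. A bare-bones satisfiability version, unit propagation plus branching over DIMACS-style signed-integer literals, can be sketched as follows; everything here is illustrative, not the original 1962 program.

```python
def dpll(clauses):
    """Bare-bones DPLL sketch: unit propagation plus branching.
    Clauses are sets of nonzero ints; -v denotes the negation of v."""
    clauses = [set(c) for c in clauses]
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    while units:                                  # unit propagation
        lit = units.pop()
        simplified = []
        for c in clauses:
            if lit in c:
                continue                          # clause already satisfied
            c = c - {-lit}                        # falsified literal drops out
            if not c:
                return False                      # empty clause: conflict
            if len(c) == 1:
                units.append(next(iter(c)))
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return True                               # all clauses satisfied
    v = next(iter(clauses[0]))                    # branch on some literal
    return dpll(clauses + [{v}]) or dpll(clauses + [{-v}])

print(dpll([{1, 2}, {-1, 2}, {-2, 3}, {-3}]))     # False: forces both 1 and -1
print(dpll([{1, 2}, {-1, 2}]))                    # True: satisfied by 2
```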
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
Exploiting structure in an AIG based QBF solver In this paper we present a procedure for solving quantified boolean formulas (QBF), which uses And-Inverter Graphs (AIGs) as the core data-structure. We make extensive use of structural information extracted from the input formula such as functional definitions of variables and non-linear quantifier structures. We show how this information can directly be exploited by the symbolic, AIG based representation. We implemented a prototype QBF solver based on our ideas and performed a number of experiments proving the effectiveness of our approach, and moreover, showing that our method is able to solve QBF instances on which state-of-the-art QBF solvers known from literature fail.
Value minimization in circumscription Minimization in circumscription has focussed on minimizing the extent of a set of predicates (with or without priorities among them), or of a formula. Although functions and other constants may be left varying during circumscription, no earlier formalism, to the best of our knowledge, minimized functions. In this paper we introduce and motivate the notion of value minimizing a function in circumscription. Intuitively, value minimizing a function consists in choosing those models where the value of the function is minimal relative to an ordering on its range. We first give the formulation of value minimization of a single function based on a syntactic transformation and then give a formulation in model-theoretic terms. We then discuss value minimization of a set of functions with and without priorities. We show how Lifschitz's Nested Abnormality Theories can be used to express value minimization, and discuss the prospect of its use for knowledge representation, particularly in formalizing reasoning about actions.
Restricted Monotonicity A knowledge representation problem can be sometimes viewed as an element of a family of problems, with parameters corresponding to possible assumptions about the domain under consideration. When additional assumptions are made, the class of domains that are being described becomes smaller, so that the class of conclusions that are true in all the domains becomes larger. As a result, a satisfactory solution to a parametric knowledge representation problem on the basis of some nonmonotonic...
Perceiving And Reasoning About A Changing World A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason-schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR.
Automatic Parallel I/O Performance Optimization Using Genetic Algorithms The complexity of parallel I/O systems imposes significant challenge in managing and utilizing the available system resources to meet application performance, portability and usability goals. We believe that a parallel I/O system that automatically selects efficient I/O plans for user applications is a solution to this problem. In this paper, we present such an automatic performance optimization approach for scientific applications performing collective I/O requests on multidimensional arrays. The approach is based on a high level description of the target workload and execution environment characteristics, and applies genetic algorithms to select high quality I/O plans. We have validated this approach in the Panda parallel I/O library. Our performance evaluations on the IBM SP show that this approach can select high quality I/O plans under a variety of system conditions with a low overhead, and the genetic algorithm-selected I/O plans are in general better than the default plans used in Panda.
Introduction: progress in formal commonsense reasoning This special issue consists largely of expanded and revised versions of selected papers of the Fifth International Symposium on Logical Formalizations of Commonsense Reasoning (Common Sense 2001), held at New York University in May 2001. The Common Sense Symposia, first organized in 1991 by John McCarthy and held roughly biannually since, are dedicated to exploring the development of formal commonsense theories using mathematical logic. Commonsense reasoning is a central part of human behavior; no real intelligence is possible without it. Thus, the development of systems that exhibit commonsense behavior is a central goal of Artificial Intelligence. It has proven to be more difficult to create systems that are capable of commonsense reasoning than systems that can solve "hard" reasoning problems. There are chess-playing programs that beat champions [5] and expert systems that assist in clinical diagnosis [32], but no programs that reason about how far one must bend over to put on one's socks. Part of the difficulty is the all-encompassing aspect of commonsense reasoning: any problem one looks at touches on many different types of knowledge. Moreover, in contrast to expert knowledge which is usually explicit, most commonsense knowledge is implicit. One of the prerequisites to developing commonsense reasoning systems is making this knowledge explicit. John McCarthy [25] first noted this need and suggested using formal logic to encode commonsense knowledge and reasoning. In the ensuing decades, there has been much research on the representation of knowledge in formal logic and on inference algorithms to
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1.038006, 0.044428, 0.019048, 0.016828, 0.007994, 0.003722, 0.001091, 0.000143, 0.000011, 0, 0, 0, 0, 0
Towards Scaling Up Classification-Based Speech Separation Formulating speech separation as a binary classification problem has been shown to be effective. While good separation performance is achieved in matched test conditions using kernel support vector machines (SVMs), separation in unmatched conditions involving new speakers and environments remains a big challenge. A simple yet effective method to cope with the mismatch is to include many different acoustic conditions into the training set. However, large-scale training is almost intractable for kernel machines due to computational complexity. To enable training on relatively large datasets, we propose to learn more linearly separable and discriminative features from raw acoustic features and train linear SVMs, which are much easier and faster to train than kernel SVMs. For feature learning, we employ standard pre-trained deep neural networks (DNNs). The proposed DNN-SVM system is trained on a variety of acoustic conditions within a reasonable amount of time. Experiments on various test mixtures demonstrate good generalization to unseen speakers and background noises.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
Voice conversion using deep neural networks with layer-wise generative training This paper presents a new spectral envelope conversion method using deep neural networks (DNNs). The conventional joint density Gaussian mixture model (JDGMM) based spectral conversion methods perform stably and effectively. However, the speech generated by these methods suffers severe quality degradation due to the following two factors: 1) inadequacy of JDGMM in modeling the distribution of spectral features as well as the non-linear mapping relationship between the source and target speakers, 2) spectral detail loss caused by the use of high-level spectral features such as mel-cepstra. Previously, we have proposed to use the mixture of restricted Boltzmann machines (MoRBM) and the mixture of Gaussian bidirectional associative memories (MoGBAM) to cope with these problems. In this paper, we propose to use a DNN to construct a global non-linear mapping relationship between the spectral envelopes of two speakers. The proposed DNN is generatively trained by cascading two RBMs, which model the distributions of spectral envelopes of source and target speakers respectively, using a Bernoulli BAM (BBAM). Therefore, the proposed training method takes advantage of the strong modeling ability of RBMs in modeling the distribution of spectral envelopes and the superiority of BAMs in deriving the conditional distributions for conversion. Careful comparisons and analysis among the proposed method and some conventional methods are presented in this paper. The subjective results show that the proposed method can significantly improve the performance in terms of both similarity and naturalness compared to conventional methods.
Wiener filtering based speech enhancement with Weighted Denoising Auto-encoder and noise classification. • Weighted reconstruction loss function is introduced to the conventional DA model. • Weighted DA is used to model the relationship between clean and noisy speech features. • GMM-based noise classification is employed to improve the generalization ability. • The size and number of hidden layers have no distinct effect on the performance of WDA. • Better speech quality is achieved with similar SNR improvement and noise reduction.
Interactive Spoken Content Retrieval by Deep Reinforcement Learning. For text content retrieval, the user can easily scan through and select from a list of retrieved items. This is impossible for spoken content retrieval, because the retrieved items are not easily displayed on-screen. In addition, due to the high degree of uncertainty for speech recognition, retrieval results can be very noisy. One way to counter such difficulties is through user-machine interaction.
Statistical Parametric Speech Synthesis Using Deep Neural Networks Conventional approaches to statistical parametric speech synthesis typically use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech parameters given texts. Speech parameters are generated from the probability densities to maximize their output probabilities, then a speech waveform is reconstructed from the generated parameters. This approach is reasonably effective but has a couple of limitations, e.g. decision trees are inefficient for modeling complex context dependencies. This paper examines an alternative scheme that is based on a deep neural network (DNN). The relationship between input texts and their acoustic realizations is modeled by a DNN. The use of the DNN can address some limitations of the conventional approach. Experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters.
Deep Structured Energy Based Models for Anomaly Detection. In this paper, we attack the anomaly detection problem by directly modeling the data distribution with deep architectures. We propose deep structured energy based models (DSEBMs), where the energy function is the output of a deterministic deep neural network with structure. We develop novel model architectures to integrate EBMs with different types of data such as static data, sequential data, and spatial data, and apply appropriate model architectures to adapt to the data structure. Our training algorithm is built upon the recent development of score matching (Hyvärinen, 2005), which connects an EBM with a regularized autoencoder, eliminating the need for complicated sampling method. Statistically sound decision criterion can be derived for anomaly detection purpose from the perspective of the energy landscape of the data distribution. We investigate two decision criteria for performing anomaly detection: the energy score and the reconstruction error. Extensive empirical studies on benchmark tasks demonstrate that our proposed model consistently matches or outperforms all the competing methods.
Training restricted Boltzmann machines using approximations to the likelihood gradient A new algorithm for training Restricted Boltzmann Machines is introduced. The algorithm, named Persistent Contrastive Divergence, is different from the standard Contrastive Divergence algorithms in that it aims to draw samples from almost exactly the model distribution. It is compared to some standard Contrastive Divergence and Pseudo-Likelihood algorithms on the tasks of modeling and classifying various types of data. The Persistent Contrastive Divergence algorithm outperforms the other algorithms, and is equally fast and simple.
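The one-line difference from standard CD is where the negative-phase samples come from: in this sketch the "fantasy" chain persists across parameter updates instead of restarting at the data as CD-1 does. Biases are omitted and all data and constants are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = np.zeros((6, 4))                            # 6 visible, 4 hidden units
fantasy_v = (rng.random((32, 6)) < 0.5) * 1.0   # persistent negative chain
for step in range(100):
    v_data = (rng.random((32, 6)) < 0.5) * 1.0  # stand-in minibatch
    h_data = sigmoid(v_data @ W)                # positive phase from data
    # negative phase: advance the persistent chain one Gibbs step,
    # rather than restarting it at the data as CD-1 would
    h_f = (rng.random((32, 4)) < sigmoid(fantasy_v @ W)) * 1.0
    fantasy_v = (rng.random((32, 6)) < sigmoid(h_f @ W.T)) * 1.0
    W += 0.05 * (v_data.T @ h_data - fantasy_v.T @ sigmoid(fantasy_v @ W)) / 32
print(np.round(np.abs(W).mean(), 3))
```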
Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions An approach to semi-supervised learning is pro- posed that is based on a Gaussian random field model. Labeled and unlabeled data are rep- resented as vertices in a weighted graph, with edge weights encoding the similarity between in- stances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propa- gation. The resulting learning algorithms have intimate connections with random walks, elec- tric networks, and spectral graph theory. We dis- cuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text clas- sification tasks.
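The closed-form harmonic solution is short enough to state directly: labeled nodes are clamped, and unlabeled values solve a linear system in the graph Laplacian. The four-node chain graph below, with labeled endpoints and two unlabeled interior nodes, is a toy example.

```python
import numpy as np

# Harmonic function on a weighted graph: unlabeled values satisfy
# f_u = L_uu^{-1} W_ul f_l, where L = D - W is the graph Laplacian.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
labeled, unlabeled = [0, 3], [1, 2]
f_l = np.array([1.0, 0.0])             # node 0 labeled 1, node 3 labeled 0

D = np.diag(W.sum(axis=1))
L = D - W
f_u = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                      W[np.ix_(unlabeled, labeled)] @ f_l)
print(f_u)                             # smooth interpolation: [2/3, 1/3]
```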
Training Hierarchical Feed-Forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks Building visual recognition models that adapt across different domains is a challenging task for computer vision. While feature-learning machines in the form of hierarchical feed-forward models (e.g., convolutional neural networks) showed promise in this direction, they are still difficult to train especially when few training examples are available. In this paper, we present a framework for training hierarchical feed-forward models for visual recognition, using transfer learning from pseudo tasks. These pseudo tasks are automatically constructed from data without supervision and comprise a set of simple pattern-matching operations. We show that these pseudo tasks induce an informative inverse-Wishart prior on the functional behavior of the network, offering an effective way to incorporate useful prior knowledge into the network training. In addition to being extremely simple to implement, and adaptable across different domains with little or no extra tuning, our approach achieves promising results on challenging visual recognition tasks, including object recognition, gender recognition, and ethnicity recognition.
Utilitarian Desires Autonomous agents reason frequently about preferences such as desires and goals. In this paper we propose a logic of desires with a utilitarian semantics, in which we study nonmonotonic reasoning about desires and preferences based on the idea that desires can be understood in terms of utility losses (penalties for violations) and utility gains (rewards for fulfillments). Our logic allows for a systematic study and classification of desires, for example by distinguishing subtly different ways to add up these utility losses and gains. We propose an explicit construction of the agent's preference relation from a set of desires together with different kinds of knowledge. A set of desires extended with knowledge induces a set of ‘distinguished’ utility functions by adding up the utility losses and gains of the individual desires, and these distinguished utility functions induce the preference relation.
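A toy illustration of the "adding up" idea (not the paper's formal logic): each desire carries a reward for fulfillment and a penalty for violation, a world's utility is the sum over desires, and the preference relation orders worlds by that utility.

```python
# Hypothetical desires: (proposition desired, reward if true, penalty if false).
desires = [
    ("dry",    2, -5),
    ("coffee", 1, -1),
]

def utility(world):
    # Sum utility gains for fulfilled desires and losses for violated ones.
    total = 0
    for prop, reward, penalty in desires:
        total += reward if prop in world else penalty
    return total

# The induced preference relation: world A is preferred to world B
# when its summed utility is higher.
print(utility({"dry", "coffee"}) > utility({"coffee"}))   # True
```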
On the Power of Deterministic Reductions to C=P The counting class C=P, which captures the notion of "exact counting," while extremely powerful under various nondeterministic reductions, is quite weak under polynomial-time deterministic reductions. We discuss the analogies between NP and co-C=P, which allow us to derive many interesting results for such deterministic reductions to co-C=P. We exploit these results to obtain some interesting oracle separations. Most importantly, we show that there exists an oracle A such that ⊕P^A ⊈ P^(C=P^A) and BPP^A ⊈ P^(C=P^A). Therefore, techniques that would prove that C=P and PP are polynomial-time Turing equivalent, or that C=P is polynomial-time Turing hard for the polynomial-time hierarchy, would not relativize.
Automatic Derivation and Application of Induction Schemes for Mutually Recursive Functions This paper advocates and explores the use of multi-predicate induction schemes for proofs about mutually recursive functions. The interactive application of multi-predicate schemes stemming from datatype definitions is already well-established practice; this paper describes an automated proof procedure based on multi-predicate schemes. Multi-predicate schemes may be formally derived from (mutually recursive) function definitions; such schemes are often helpful in proving properties of mutually recursive functions where the recursion pattern does not follow that of the underlying datatypes. These ideas have been implemented using the HOL theorem prover and the Clam proof planner.
Integrating representation learning and skill learning in a human-like intelligent agent. Building an intelligent agent that simulates human learning of math and science could potentially benefit both cognitive science, by contributing to the understanding of human learning, and artificial intelligence, by advancing the goal of creating human-level intelligence. However, constructing such a learning agent currently requires manual encoding of prior domain knowledge; in addition to being a poor model of human acquisition of prior knowledge, manual knowledge-encoding is both time-consuming and error-prone. Previous research has shown that one of the key factors that differentiates experts and novices is their different representations of knowledge. Experts view the world in terms of deep functional features, while novices view it in terms of shallow perceptual features. Moreover, since the performance of learning algorithms is sensitive to representation, the deep features are also important in achieving effective machine learning. In this paper, we present an efficient algorithm that acquires representation knowledge in the form of “deep features”, and demonstrate its effectiveness in the domain of algebra as well as synthetic domains. We integrate this algorithm into a machine-learning agent, SimStudent, which learns procedural knowledge by observing a tutor solve sample problems, and by getting feedback while actively solving problems on its own. We show that learning “deep features” reduces the requirements for knowledge engineering. Moreover, we propose an approach that automatically discovers student models using the extended SimStudent. By fitting the discovered model to real student learning curve data, we show that it is a better student model than human-generated models, and demonstrate how the discovered model may be used to improve a tutoring system's instructional strategy.
1.028194
0.023704
0.022736
0.022222
0.022222
0.002222
0.000494
0.000117
0.000026
0.000001
0
0
0
0
Heuristics for planning with penalties and rewards formulated in logic and computed through circuits The automatic derivation of heuristic functions for guiding the search for plans is a fundamental technique in planning. The type of heuristics that have been considered so far, however, deal only with simple planning models where costs are associated with actions but not with states. In this work we address this limitation by formulating a more expressive planning model and a corresponding heuristic where preferences in the form of penalties and rewards are associated with fluents as well. The heuristic, that is a generalization of the well-known delete-relaxation heuristic, is admissible, informative, but intractable. Exploiting a correspondence between heuristics and preferred models, and a property of formulas compiled in d-DNNF, we show however that if a suitable relaxation of the domain, expressed as the strong completion of a logic program with no time indices or horizon is compiled into d-DNNF, the heuristic can be computed for any search state in time that is linear in the size of the compiled representation. This representation defines an evaluation network or circuit that maps states into heuristic values in linear-time. While this circuit may have exponential size in the worst case, as for OBDDs, this is not necessarily so. We report empirical results, discuss the application of the framework in settings where there are no goals but just preferences, and illustrate the versatility of the account by developing a new heuristic that overcomes limitations of delete-based relaxations through the use of valid but implicit plan constraints. In particular, for the Traveling Salesman Problem, the new heuristic captures the exact cost while the delete-relaxation heuristic, which is also exponential in the worst case, captures only the Minimum Spanning Tree lower bound.
Utilizing Problem Structure in Planning: A Local Search Approach
Strengthening Landmark Heuristics via Hitting Sets The landmark cut heuristic is perhaps the strongest known polytime admissible approximation of the optimal delete relaxation heuristic h+. Equipped with this heuristic, a best-first search was able to optimally solve 40% more benchmark problems than the winners of the sequential optimization track of IPC 2008. We show that this heuristic can be understood as a simple relaxation of a hitting set problem, and that stronger heuristics can be obtained by considering stronger relaxations. Based on these findings, we propose a simple polytime method for obtaining heuristics stronger than landmark cut, and evaluate them over benchmark problems. We also show that hitting sets can be used to characterize h+ and thus provide a fresh and novel insight for better comprehension of the delete relaxation.
Local search topology in planning benchmarks: an empirical analysis Many state-of-the-art heuristic planners derive their heuristic function by relaxing the planning task at hand, where the relaxation is to assume that all delete lists are empty. Looking at a collection of planning benchmarks, we measure topological properties of state spaces with respect to that relaxation. The results suggest that, given the heuristic based on the relaxation, many planning benchmarks are simple in structure. This sheds light on the recent success of heuristic planners employing local search.
The FF planning system: fast plan generation through heuristic search We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP.
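FF's search strategy, enforced hill-climbing, is easy to sketch in the abstract. The function below is a generic illustration, not FF's code: `successors`, `h`, and `is_goal` are assumed hooks standing in for the planner's state expansion, relaxed-plan heuristic, and goal test.

```python
from collections import deque

def enforced_hill_climbing(start, successors, h, is_goal):
    # From the current state, breadth-first search for ANY state with a
    # strictly better heuristic value, then jump there and repeat.
    state = start
    while not is_goal(state):
        best, frontier, seen = h(state), deque([state]), {state}
        found = None
        while frontier and found is None:
            for nxt in successors(frontier.popleft()):
                if nxt in seen:
                    continue
                seen.add(nxt)
                if h(nxt) < best:      # strictly better: plateau escaped
                    found = nxt
                    break
                frontier.append(nxt)
        if found is None:
            return None                # EHC failed; FF would fall back
        state = found                  # to a complete best-first search
    return state
```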
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Affinity analysis of coded data sets Coded data sets are commonly used as compact representations of real world processes. Such data sets have been studied within various research fields from association mining, data warehousing, knowledge discovery, collaborative filtering to machine learning. However, previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval for enabling high performance analysis of data masses that scale beyond traditional approaches. Part of this PhD study focuses on a new type of kernel projection function that can be used to find similarities in sparse discrete data spaces. This study presents experimental results showing how information retrieval indexes scale and outperform two common relational data schemas with a leading commercial DBMS for market basket analysis.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5], where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the "counting polynomial-time hierarchy" which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved.
A Stable Distributed Scheduling Algorithm
Application performance and flexibility on exokernel systems The exokernel operating system architecture safely gives untrusted software efficient control over hardware and software resources by separating management from protection. This paper describes an exokernel system that allows specialized applications to achieve high performance without sacrificing the performance of unmodified UNIX programs. It evaluates the exokernel architecture by measuring end-to-end application performance on Xok, an exokernel for Intel x86-based computers, and by comparing Xok's performance to the performance of two widely-used 4.4BSD UNIX systems (FreeBSD and OpenBSD). The results show that common unmodified UNIX applications can enjoy the benefits of exokernels: applications either perform comparably on Xok/ExOS and the BSD UNIXes, or perform significantly better. In addition, the results show that customized applications can benefit substantially from control over their resources (e.g., a factor of eight for a Web server). This paper also describes insights about the exokernel approach gained through building three different exokernel systems, and presents novel approaches to resource multiplexing.
Optimizing the Embedded Caching and Prefetching Software on a Network-Attached Storage System As the speed gap between memory and disk is so large today, caching and prefetching are critical to enterprise class storage applications, which demand high performance. In this paper, we present our study of the performance of a mid-range storage server produced by the Quanta Computer Incorporation. We first analyzed the existing caching mechanism in the server and then developed a fast caching methodology to reduce the cache access latency and processing overhead of the storage controller. In addition, we proposed a new adaptive prefetch scheme that reduces the average disk access time seen by the host. Via trace-driven simulation, we evaluated the performance of our new caching and adaptive prefetch schemes. Our results showed the performance improvement for the TPC-C on-line transaction benchmark.
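As a flavor of what an adaptive prefetch policy can look like, here is a hypothetical sketch (not the paper's scheme): a detector that widens its prefetch window while a request stream stays sequential and collapses it when the stream breaks.

```python
class AdaptivePrefetcher:
    """Illustrative sequential-stream detector; all parameters are placeholders."""

    def __init__(self, max_depth=32):
        self.last_block = None
        self.run = 0
        self.max_depth = max_depth

    def on_read(self, block):
        # Grow the prefetch window while accesses stay sequential;
        # collapse it as soon as the stream breaks.
        if self.last_block is not None and block == self.last_block + 1:
            self.run += 1
        else:
            self.run = 0
        self.last_block = block
        depth = min(2 ** self.run, self.max_depth) if self.run else 0
        return list(range(block + 1, block + 1 + depth))  # blocks to prefetch

pf = AdaptivePrefetcher()
for b in (10, 11, 12, 50, 51):
    print(b, pf.on_read(b))
```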
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.025
0.022222
0.015385
0.001418
0
0
0
0
0
0
0
0
0
Satisfiability Parsimoniously Reduces to the Tantrix™ Rotation Puzzle Problem Holzer and Holzer [10] proved that the Tantrix™ rotation puzzle problem is NP-complete. They also showed that for infinite rotation puzzles, this problem becomes undecidable. We study the counting version and the unique version of this problem. We prove that the satisfiability problem parsimoniously reduces to the Tantrix™ rotation puzzle problem. In particular, this reduction preserves the uniqueness of the solution, which implies that the unique Tantrix™ rotation puzzle problem is as hard as the unique satisfiability problem, and so is DP-complete under polynomial-time randomized reductions, where DP is the second level of the boolean hierarchy over NP.
The Three-Color and Two-Color TantrixTM Rotation Puzzle Problems Are NP-Complete Via Parsimonious Reductions Holzer and Holzer [M. Holzer, W. Holzer, Tantrix(TM) rotation puzzles are intractable, Discrete Applied Mathematics 144(3) (2004) 345-358] proved that the Tantrix(TM) rotation puzzle problem with four colors is NP-complete, and they showed that the infinite variant of this problem is undecidable. In this paper, we study the three-color and two-color Tantrix(TM) rotation puzzle problems (3-TRP and 2-TRP) and their variants. Restricting the number of allowed colors to three (respectively, to two) reduces the set of available Tantrix(TM) tiles from 56 to 14 (respectively, to 8). We prove that 3-TRP and 2-TRP are NP-complete, which answers a question raised by Holzer and Holzer [M. Holzer, W. Holzer, Tantrix(TM) rotation puzzles are intractable, Discrete Applied Mathematics 144(3) (2004) 345-358] in the affirmative. Since our reductions are parsimonious, it follows that the problems Unique-3-TRP and Unique-2-TRP are DP-complete under randomized reductions. We also show that the another-solution problems associated with 4-TRP, 3-TRP, and 2-TRP are NP-complete. Finally, we prove that the infinite variants of 3-TRP and 2-TRP are undecidable.
The Three-Color and Two-Color Tantrix (TM) Rotation Puzzle Problems Are NP-Complete via Parsimonious Reductions Holzer and Holzer [M. Holzer, W. Holzer, Tantrix (TM) rotation puzzles are intractable, Discrete Applied Mathematics 144 (3) (2004) 345-358] proved that the Tantrix (TM) rotation puzzle problem with four colors is NP-complete, and they showed that the infinite variant of this problem is undecidable. In this paper, we study the three-color and two-color Tantrix (TM) rotation puzzle problems (3-TRP and 2-TRP) and their variants. Restricting the number of allowed colors to three (respectively, to two) reduces the set of available Tantrix (TM) tiles from 56 to 14 (respectively, to 8). We prove that 3-TRP and 2-TRP are NP-complete, which answers a question raised by Holzer and Holzer [M. Holzer, W. Holzer, Tantrix (TM) rotation puzzles are intractable, Discrete Applied Mathematics 144 (3) (2004) 345-358] in the affirmative. Since our reductions are parsimonious, it follows that the problems Unique-3-TRP and Unique-2-TRP are DP-complete under randomized reductions. We also show that the another-solution problems associated with 4-TRP, 3-TRP, and 2-TRP are NP-complete. Finally, we prove that the infinite variants of 3-TRP and 2-TRP are undecidable.
On unique satisfiability and the threshold behavior of randomized reductions The research presented in this paper is motivated by the following new results on the complexity of the unique satisfiability problem, USAT: if USAT is ≡^p_m-equivalent to its complement, then D^P = co-D^P and PH collapses; if USAT ∈ co-D^P, then PH collapses; if USAT has ORω, then PH collapses. The proofs of these results use only the fact that USAT is complete for D^P under randomized reductions, even though the probability bound of these reductions may be low. Furthermore, these results show that the structural complexity of USAT and of D^P many-one complete sets are very similar, and so they lend support to the argument that even sets complete under "weak" randomized reductions can capture the properties of the many-one complete sets. However, under these "weak" randomized reductions, USAT is complete for P^SAT[O(log n)] as well, and in this case, USAT does not capture the properties of the sets many-one complete for P^SAT[O(log n)]. To explain this anomaly, the concept of the threshold behavior of randomized reductions is developed. Tight bounds on the thresholds are shown for NP, co-NP, D^P and co-D^P. Furthermore, these results can be generalized to give upper and lower bounds on the thresholds for the Boolean Hierarchy. These upper bounds are expressed in terms of Fibonacci numbers.
Saving queries with randomness In this paper, we investigate the power of randomness to save a query to an NP-complete set. We show that the ≤^p_m-complete language for P^SAT_||[k] randomly reduces to a language in P^SAT_||[k-1] with a one-sided error probability of 1/⌈k/2⌉ or a two-sided error probability of 1/(k+1). Furthermore, we prove that these probability bounds are tight; i.e., they cannot be improved by 1/poly, unless PH collapses. We also obtain tight performance bounds for randomized reductions between nearby classes in the Boolean and bounded query hierarchies. These bounds provide probability thresholds for completeness under randomized reductions in these classes. Using these thresholds, we show that certain languages in the Boolean hierarchy which are not ≤^p_m-complete in some relativized worlds nevertheless inherit many of the hardness properties associated with the ≤^p_m-complete languages. Finally, we explore the relationship between randomization and functions that are computable using bounded queries to SAT. For any function h(n) = O(log n), we show that there is a function f computable using h(n) nonadaptive queries to SAT, which cannot be computed correctly with probability 1/2 + 1/poly by any randomized machine which makes fewer than h(n) adaptive queries to any oracle, unless PH collapses.
On linear characterizations of combinatorial optimization problems We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP = co-NP -- a very unlikely event. We also apply the ellipsoid method for linear programming to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.
The Minimization Problem for Boolean Formulas More than a quarter of a century ago, the question of the complexity of determining whether a given Boolean formula is minimal motivated Meyer and Stockmeyer to define the polynomial hierarchy. This problem (in the standard formalized version, that of Garey and Johnson) has been known for decades to be coNP-hard and in NP^NP, and yet no one had even been able to establish (many-one) NP-hardness. In this paper, we show that and more: the problem in fact is (many-one) hard for parallel access to NP.
Banishing Robust Turing Completeness This paper proves that "promise classes" are so fragilely structured that they do not robustly (i.e., with respect to all oracles) possess Turing-hard sets even in classes far larger than themselves. In particular, this paper shows that FewP does not robustly possess Turing-hard sets for UP ∩ coUP, and IP ∩ coIP does not robustly possess Turing-hard sets for ZPP. It follows that ZPP, R, coR, UP ∩ coUP, UP, FewP ∩ coFewP, FewP, and IP ∩ coIP do not robustly possess Turing complete sets. This both resolves open questions of whether promise classes lacking robust downward closure under Turing reductions (e.g., R, UP, FewP) might robustly have Turing complete sets, and extends the range of classes known not to robustly contain many-one complete sets.
Algorithms on strings, trees, and sequences: computer science and computational biology
HFS: a performance-oriented flexible file system based on building-block compositions The Hurricane File System (HFS) is designed for (potentially large-scale) shared-memory multiprocessors. Its architecture is based on the principle that, in order to maximize performance for applications with diverse requirements, a file system must support a wide variety of file structures, file system policies, and I/O interfaces. Files in HFS are implemented using simple building blocks composed in potentially complex ways. This approach yields great flexibility, allowing an application to customize the structure and policies of a file to exactly meet its requirements. As an extreme example, HFS allows a file's structure to be optimized for concurrent random-access write-only operations by 10 threads, something no other file system can do. Similarly, the prefetching, locking, and file cache management policies can all be chosen to match an application's access pattern. In contrast, most parallel file systems support a single file structure and a small set of policies. We have implemented HFS as part of the Hurricane operating system running on the Hector shared-memory multiprocessor. We demonstrate that the flexibility of HFS comes with little processing or I/O overhead. We also show that for a number of file access patterns, HFS is able to deliver to the applications the full I/O bandwidth of the disks on our system.
Causal theories of action: a computational core We propose a framework for simple causal theories of action, and study the computational complexity in it of various reasoning tasks such as determinism, progression and regression under various assumptions. As it turns out, even the simplest among them, one-step temporal projection with complete initial state, is intractable. We also briefly consider an extension of the framework to allow truly indeterministic actions, and find that this extension does not increase the complexity of any of the tasks considered here.
Towards Self-Tuning Memory Management for Data Servers Although today's computers provide huge amounts of main memory, the ever-increasing load of large data servers, imposed by resource-intensive decision-support queries and accesses to multimedia and other complex data, often leads to memory contention and may result in severe performance degradation. Therefore, careful tuning of memory management is crucial for heavy-load data servers. This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system. The common, fundamental elements in these methods include on-line load tracking, near-future access prediction based on stochastic models and the available on-line statistics, and dynamic and automatic adjustment of control parameters in a feedback loop.
Improving the tolerance of multilayer perceptrons by minimizing the statistical sensitivity to weight deviations This paper proposes a version of the backpropagation algorithm which increases the tolerance of a feedforward neural network against deviations in the weight values. These changes can originate either when the neural network is mapped on a given VLSI circuit where the precision and/or weight matching are low, or by physical defects affecting the neural circuits. The modified backpropagation algorithm we propose uses the statistical sensitivity of the network to changes in the weights as a quantitative measure of network tolerance and attempts to reduce this statistical sensitivity while keeping the figures for the usual training performance (in errors and time) similar to those obtained with the usual backpropagation algorithm.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.1522
0.178173
0.178173
0.075356
0.00549
0.000768
0.000326
0.000006
0
0
0
0
0
0
Transfer learning method using multi-prediction deep Boltzmann machines for a small scale dataset In this article, we propose a transfer learning method using the multi-prediction deep Boltzmann machine (MPDBM). In recent years, deep learning has been widely used in many applications such as image classification and object detection. However, it is hard to apply a deep learning method to medical images because deep learning needs a large amount of training data to train the deep neural network. Medical image datasets such as X-ray CT image datasets do not have enough training data because of privacy concerns. In this article, we propose a method that re-uses a network trained on non-medical images (the source domain) to improve performance even when we have only a small number of medical images (the target domain). Our proposed method first trains the deep neural network to solve the source task using the MPDBM. Second, we evaluate the relation between the source domain and the target domain: we input the target domain data into the deep neural network trained on the source domain, and then compute histograms based on the response of the output layer. After computing the histograms, we select the variables of the output layer corresponding to the target domain. Then, we tune the parameters in such a way that the selected variables respond as the outputs of the target domain. In this article, we use the MNIST dataset as the source domain and a lung dataset of X-ray CT images as the target domain. Experimental results show that our proposed method can improve classification performance.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
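The definition is small enough to execute directly. The following Python sketch brute-forces the stable models of a tiny ground program via the Gelfond-Lifschitz reduct; the example program {p :- not q. q :- not p.} has two stable models, {p} and {q}.

```python
from itertools import chain, combinations

# Rules are (head, positive_body, negative_body) over ground atoms.
rules = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
]
atoms = {"p", "q"}

def least_model(definite_rules):
    # Fixpoint of the positive (reduct) program.
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body is
    # contradicted by the candidate, then erase negative literals.
    reduct = [(h, pos) for h, pos, neg in rules
              if not any(a in candidate for a in neg)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])  # [{'p'}, {'q'}]
```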
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
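The algorithm reduces to an eigendecomposition of the (centered) kernel matrix. A minimal numpy sketch with an RBF kernel, using illustrative data and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
gamma, k = 0.5, 2                            # illustrative kernel width, #components

# RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2).
sq = np.sum(X**2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

n = len(X)
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one   # centering in feature space

vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
alphas = vecs / np.sqrt(vals)                # normalize the expansion coefficients
projections = Kc @ alphas                    # k nonlinear principal components
print(projections.shape)                     # (50, 2)
```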
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
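The core of the batch method is ordinary linear least squares solved through a matrix square root. A toy 1-D numpy illustration (real SLAM Jacobians are large, sparse, and benefit from the column orderings the paper discusses):

```python
import numpy as np

# A prior on x0 plus two odometry constraints form a linear system
# A x = b; QR-factorizing A and back-substituting on the triangular
# factor R (the "square root" of the information matrix) recovers the
# whole trajectory at once.
A = np.array([[ 1.0,  0.0,  0.0],   # prior:     x0      = 0
              [-1.0,  1.0,  0.0],   # odometry:  x1 - x0 = 1.1
              [ 0.0, -1.0,  1.0]])  # odometry:  x2 - x1 = 0.9
b = np.array([0.0, 1.1, 0.9])

Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)      # back-substitution on the square root R
print(x)                             # approximately [0.0, 1.1, 2.0]
```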
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Partial weighted MaxSAT for optimal planning We consider the problem of computing optimal plans for propositional planning problems with action costs. In the spirit of leveraging advances in general-purpose automated reasoning for that setting, we develop an approach that operates by solving a sequence of partial weighted MaxSAT problems, each of which corresponds to a step-bounded variant of the problem at hand. Our approach is the first SAT-based system in which a proof of cost-optimality is obtained using a MaxSAT procedure. It is also the first system of this kind to incorporate an admissible planning heuristic. We perform a detailed empirical evaluation of our work using benchmarks from a number of International Planning Competitions.
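The solving layer of such a system can be sketched with an off-the-shelf MaxSAT stack. The example below uses the third-party python-sat package (an assumption for illustration; it is not the authors' system): hard clauses stand in for the step-bounded planning encoding, and weighted soft clauses carry action costs.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
# Hard clauses (toy stand-ins for the plan-validity encoding):
wcnf.append([1, 2])           # at least one of actions a1, a2 at this step
wcnf.append([-1, -2])         # ... but not both
# Soft clauses: prefer NOT executing an action, weighted by its cost.
wcnf.append([-1], weight=3)   # action a1 costs 3
wcnf.append([-2], weight=1)   # action a2 costs 1

with RC2(wcnf) as solver:
    model = solver.compute()   # optimal model, minimizing violated soft weight
    print(model, solver.cost)  # e.g. [-1, 2] with cost 1
```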
A Complete Algorithm for Generating Landmarks.
Pruning Methods for Optimal Delete-Free Planning.
Strengthening Landmark Heuristics via Hitting Sets The landmark cut heuristic is perhaps the strongest known polytime admissible approximation of the optimal delete relaxation heuristic h+. Equipped with this heuristic, a best-first search was able to optimally solve 40% more benchmark problems than the winners of the sequential optimization track of IPC 2008. We show that this heuristic can be understood as a simple relaxation of a hitting set problem, and that stronger heuristics can be obtained by considering stronger relaxations. Based on these findings, we propose a simple polytime method for obtaining heuristics stronger than landmark cut, and evaluate them over benchmark problems. We also show that hitting sets can be used to characterize h+ and thus provide a fresh and novel insight for better comprehension of the delete relaxation.
On the compilability and expressive power of propositional planning formalisms The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of "expressive power" is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of "compilation schemes" between planning formalisms. Using this notion, we analyze the expressiveness of a large family of propositional planning formalisms, ranging from basic STRIPS to a formalism with conditional effects, partial state specifications, and propositional formulae in the preconditions. One of the results is that conditional effects cannot be compiled away if plan size should grow only linearly but can be compiled away if we allow for polynomial growth of the resulting plans. This result confirms that the recently proposed extensions to the GRAPHPLAN algorithm concerning conditional effects are optimal with respect to the "compilability" framework. Another result is that general propositional formulae cannot be compiled into conditional effects if the plan size should be preserved linearly. This implies that allowing general propositional formulae in preconditions and effect conditions adds another level of difficulty in generating a plan.
The computational complexity of propositional STRIPS planning I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For these types of restrictions, I show when planning is tractable (polynomial) and intractable (NP-hard). In general, it is...
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
On the facial structure of set packing polyhedra In this paper we address ourselves to identifying facets of the set packing polyhedron, i.e., of the convex hull of integer solutions to the set covering problem with equality constraints and/or constraints of the form "≤". This is done by using the equivalent node-packing problem derived from the intersection graph associated with the problem under consideration. First, we show that the cliques of the intersection graph provide a first set of facets for the polyhedron in question. Second, it is shown that the cycles without chords of odd length of the intersection graph give rise to a further set of facets. A rather strong geometric property of this set of facets is exhibited.
Dependent Fluents We discuss the persistence of the indirect effects of an action—the question when such effects are subject to the commonsense law of inertia, and how to describe their evolution in the cases when inertia does not apply. Our model of nonpersistent effects involves the assumption that the value of the fluent in question is determined by the values of other fluents, although the dependency may be partially or completely unknown. This view leads us to a new high-level action language ARD (for Actions, Ramifications and Dependencies) that is capable of describing both persistent and nonpersistent effects. Unlike the action languages introduced in the past, ARD is "non-Markovian," in the sense that the evolution of the fluents described in this language may depend on their history, and not only on their current values.
Monotonic reductions, representative equivalence, and compilation of intractable problems The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science. In this article, we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice.
Circuit definitions of nondeterministic complexity classes We consider restrictions on Boolean circuits and use them to obtain new uniform circuit characterizations of nondeterministic space and time classes. We also obtain characterizations of counting classes based on nondeterministic time bounded computations on the arithmetic circuit model. It is shown how the notion of semiunboundedness unifies the definitions of many natural complexity classes.
Reasoning About Actions in Narrative Understanding Reasoning about actions has been a focus of interest in AI from the beginning and continues to receive attention. But the range of situations considered has been rather narrow and falls well short of what is needed for understanding natural language. Language understanding requires sophisticated reasoning about actions and events, and the world's languages employ a variety of grammatical and lexical devices to construe, direct attention and focus on, and control inferences about actions and events. We implemented a neurally inspired computational model that is able to reason about linguistic action and event descriptions, such as those found in news stories. The system uses an active event representation that also seems to provide natural and cognitively motivated solutions to classical problems in logical theories of reasoning about actions. For logical approaches to reasoning about actions, we suggest that looking at story understanding sets up fairly strong desiderata both in terms of the fine-grained event and action distinctions and the kinds of real-time inferences required.
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.1
0.025
0.025
0.011111
0.005
0.000229
0
0
0
0
0
0
0
0
Destage Algorithms for Disk Arrays with Nonvolatile Caches In a disk array with a nonvolatile write cache, destages from the cache to the disk are performed in the background asynchronously while read requests from the host system are serviced in the foreground. In this paper, we study a number of algorithms for scheduling destages in a RAID-5 system. We introduce a new scheduling algorithm, called linear threshold scheduling, that adaptively varies the rate of destages to disks based on the instantaneous occupancy of the write cache. The performance of the algorithm is compared with that of a number of alternative scheduling approaches, such as least-cost scheduling and high/low mark. The algorithms are evaluated in terms of their effectiveness in making destages transparent to the servicing of read requests from the host, disk utilization, and their ability to tolerate bursts in the workload without causing an overflow of the write cache. Our results show that linear threshold scheduling provides the best read performance of all the algorithms compared, while still maintaining a high degree of burst tolerance. An approximate implementation of the linear-threshold scheduling algorithm is also described. The approximate algorithm can be implemented with much lower overhead, yet its performance is virtually identical to that of the ideal algorithm.
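The linear threshold policy itself is a one-liner. An illustrative sketch (parameter values are placeholders, not the paper's): the destage rate ramps linearly with write-cache occupancy between a low and a high mark, rather than switching on and off at fixed marks.

```python
def destage_rate(occupancy, low=0.2, high=0.8, max_rate=100):
    """occupancy in [0, 1] -> destages per second to issue."""
    if occupancy <= low:
        return 0                       # cache nearly empty: stay idle
    if occupancy >= high:
        return max_rate                # near overflow: destage flat out
    # In between, scale the rate linearly with occupancy.
    return max_rate * (occupancy - low) / (high - low)

for occ in (0.1, 0.3, 0.5, 0.9):
    print(occ, destage_rate(occ))
```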
An Adaptive High-Low Water Mark Destage Algorithm for Cached RAID5 The High-Low Water Mark destage (HLWM) algorithm is widely used to enable a cached RAID5 to flush dirty data from its write cache to disks. It activates and deactivates a destaging process based on two time-invariant thresholds which are determined by cache occupancy levels. However, the opportunity exists to improve I/O throughput by adaptively changing the thresholds. This paper proposes an adaptive HLWM algorithm which dynamically changes its thresholds according to a varying I/O workload. The two thresholds are defined as the product of the rate of change of the cache occupancy level and the time required to fill and empty the cache. Performance evaluations with a cached RAID5 simulator reveal that the proposed algorithm outperforms the HLWM algorithm in terms of read response time, write cache hit ratio, and disk utilization.
Availability in the Sprite distributed file system In the Sprite environment, tolerating faults means recovering from them quickly. Our position is that performance and availability are the desired features of the typical locally-distributed office/engineering environment, and that very fast server recovery is the most cost-effective way of providing such availability. Mechanisms used for reliability can be inappropriate in systems with the primary goal of performance, and some availability-oriented methods using replicated hardware or processes cost too much for these systems. In contrast, availability via fast recovery need not slow down a system, and our experience in Sprite shows that in some cases the same techniques that provide high performance also provide fast recovery. In our first attempt to reduce file server recovery times to less than 90 seconds, we take advantage of the distributed state already present in our file system, and a high-performance log-structured file system currently under implementation. As a long-term goal, we hope to reduce recovery to 10 seconds or less.
On Variable Scope of Parity Protection in Disk Arrays In a common form of a RAID 5 architecture, data is organized on a disk array consisting of N + 1 disks into stripes of N data blocks and one parity block (with parity block locations staggered so as to balance the number of parity blocks on each disk). This allows data to be recovered in the event of a single disk failure. Here we consider an extension to this architecture in which parity information applies to arbitrary subsets of the data blocks in each stripe. Using several simplifying assumptions, we present simulation and analytic results that provide estimates of the improvement using this approach, in terms of total I/O operations, as compared to 1) conventional RAID 5 under a random single-block write workload, and 2) the use of a log-structured file system in which data is written out in stripes. Results on the reduction of disk recovery costs are also presented.
Volume Managers in Linux A volume manager is a subsystem for online disk storage management which has become a de-facto standard across UNIX implementations and is a serious enabler for Linux in the enterprise computing area. It adds an additional layer between the physical peripherals and the I/O interface in the kernel to present a logical view of disks, unlike current partition schemes where disks are divided into fixed-size sections. In addition to providing a logical level of management, a volume manager will often implement one or more levels of software RAID to improve performance or reliability. Advanced logical management tools and software RAID are the specialties of the Logical Volume Manager (LVM) and Multiple Devices (MD) drivers respectively. These are the two most widely used Linux volume managers today. This paper describes the current technologies available in Linux and new work in the area of volume management.
Quickly finding near-optimal storage designs Despite the importance of storage in enterprise computer systems, there are few adequate tools to design and configure a storage system to meet application data requirements efficiently. Storage system design involves choosing the disk arrays to use, setting the configuration options on those arrays, and determining an efficient mapping of application data onto the configured system. This is a complex process because of the multitude of disk array configuration options, and the need to take into account both capacity and potentially contending I/O performance demands when placing the data. Thus, both existing tools and administrators using rules of thumb often generate designs that are of poor quality. This article presents the Disk Array Designer (DAD), which is a tool that can be used both to guide administrators in their design decisions and to automate the design process. DAD uses a generalized best-fit bin packing heuristic with randomization and backtracking to search efficiently through the huge number of possible design choices. It makes decisions using device models that estimate storage system performance. We evaluate DAD's designs based on traces from a variety of database, filesystem, and e-mail workloads. We show that DAD can handle the difficult task of configuring midrange and high-end disk arrays, even with complex real-world workloads. We also show that DAD quickly generates near-optimal storage system designs, improving in both speed and quality over previous tools.
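A toy sketch of the search core: best-fit packing on remaining capacity with randomized restarts standing in for backtracking. Real DAD also consults device performance models and configuration options, all omitted here; every name below is illustrative:

import random

def best_fit_design(stores, arrays, tries=100):
    # stores: list of (name, size); arrays: list of (name, capacity).
    for _ in range(tries):                  # randomized restarts
        free = dict(arrays)
        order = stores[:]
        random.shuffle(order)
        plan = {}
        for name, size in order:
            fits = [a for a in free if free[a] >= size]
            if not fits:
                plan = None                 # dead end: restart
                break
            best = min(fits, key=lambda a: free[a] - size)   # best fit
            free[best] -= size
            plan[name] = best
        if plan is not None:
            return plan
    return None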
The LOCKSS peer-to-peer digital preservation system The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent Web caches that cooperate to detect and repair damage to their content by voting in “opinion polls.” Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection to ensure that even some very powerful adversaries attacking over many years have only a small probability of causing irrecoverable damage before being detected.
New Efficient MDS Array Codes for RAID Part II: Rabin-Like Codes for Tolerating Multiple (greater than or equal to 4) Disk Failures A new class of binary Maximum Distance Separable (MDS) array codes based on circular permutation matrices is introduced in this paper. These array codes are used for tolerating multiple (greater than or equal to 4) disk failures in the Redundant Arrays of Inexpensive Disks (RAID) architecture. The size of the information part is $m \times n$, where $n$ is the number of information disks and $m+1$ is a prime integer; the size of the parity-check part is $m \times r$, the minimum distance is $r+1$, and the number of parity-check disks is $r$. In practical applications, $m$ can be very large and $n$ ranges from 20 to 50. The code rate is $R = \frac{n}{n+r}$. These codes can be used for tolerating up to $r$ disk failures, with very fast encoding and decoding. The complexities of the encoding and decoding algorithms are $O(rmn)$ and $O(m^3 r^4)$, respectively. When $r = 4$, encoding requires $9mn$ XOR operations and decoding requires $(9n+95)(m+1)$ XOR operations.
Exploiting In-Memory and On-Disk Redundancy to Conserve Energy in Storage Systems Today's storage systems place an imperative demand on energy efficiency. A storage system often places disks into standby mode by stopping them from spinning to conserve energy when load is not high. The major obstacle to this method is the high spin-up cost introduced by passively waking up a standby disk to service a request. In this paper, we propose a redundancy-based, hierarchical I/O cache architecture called RIMAC to solve the problem. The idea of RIMAC is to enable data on the standby disk(s) to be recovered by accessing the two-level I/O cache and/or active disks if needed. In parity-based redundant disk arrays, RIMAC exploits parity redundancy to dynamically XOR-reconstruct data being accessed toward standby disk(s) at both the cache and disk levels. By avoiding passive spin-ups, RIMAC can significantly improve both energy efficiency and performance. We evaluated RIMAC by augmenting a validated storage system simulator, DiskSim, and tested four real-life server traces including HP's cello99, TPC-D, OLTP and SPC's search engine. Comprehensive results indicate RIMAC is able to reduce energy consumption by up to 18% and simultaneously improve the average response time by up to 34% in a small-scale RAID-5 system compared with threshold-based power management schemes.
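The reconstruction RIMAC exploits is the standard RAID-5 parity identity: any block in a stripe equals the XOR of the stripe's remaining data and parity blocks. A sketch that serves a read aimed at a standby disk from the active ones, with parity rotation omitted:

from functools import reduce

def read_via_reconstruction(stripe_blocks, standby_index):
    # stripe_blocks: bytes contents of all N+1 disks in one stripe
    # (N data blocks plus parity). XOR-ing every block except the
    # standby one recovers its contents without a spin-up.
    others = [b for i, b in enumerate(stripe_blocks) if i != standby_index]
    xor = lambda x, y: bytes(p ^ q for p, q in zip(x, y))
    return reduce(xor, others)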
Performance and Cost Comparison of Mirroring- and Parity-Based Reliability Schemes for Video Servers
Self-tuning wireless network power management Current wireless network power management often substantially degrades performance and may even increase overall energy usage when used with latency-sensitive applications. We propose self-tuning power management (STPM) that adapts its behavior to the access patterns and intent of applications, the characteristics of the network interface, and the energy usage of the platform. We have implemented STPM as a Linux kernel module--our results show substantial benefits for distributed file systems, streaming audio, and thin-client applications. Compared to default 802.11b power management, STPM reduces the total energy usage of an iPAQ running the Coda distributed file system by 21% while also reducing interactive file system delay by 80%. Further, STPM adapts to diverse operating conditions: it yields good results on both laptops and handhelds, supports 802.11b network interfaces with substantially different characteristics, and performs well across a range of application network access patterns.
The complexity of acyclic conjunctive queries This paper deals with the evaluation of acyclic Boolean conjunctive queries in relational databases. By well-known results of Yannakakis [1981], this problem is solvable in polynomial time; its precise complexity, however, has not been pinpointed so far. We show that the problem of evaluating acyclic Boolean conjunctive queries is complete for LOGCFL, the class of decision problems that are logspace-reducible to a context-free language. Since LOGCFL is contained in AC1 and NC2, the evaluation problem of acyclic Boolean conjunctive queries is highly parallelizable. We present a parallel database algorithm solving this problem with a logarithmic number of parallel join operations. The algorithm is generalized to computing the output of relevant classes of non-Boolean queries. We also show that the acyclic versions of the following well-known database and AI problems are all LOGCFL-complete: the Query Output Tuple problem for conjunctive queries, Conjunctive Query Containment, Clause Subsumption, and Constraint Satisfaction. The LOGCFL-completeness result is extended to the class of queries of bounded tree width and to other relevant query classes which are more general than the acyclic queries.
Partial Disk Failures: Using Software to Analyze Physical Damage A good understanding of disk failures is crucial to ensure a reliable storage of data. There have been numerous studies characterizing disk failures under the common assumption that failed disks are generally unusable. Contrary to this assumption, partial disk failures are very common, e.g., caused by a head crash resulting in a small number of inaccessible disk sectors. Nevertheless, the damage can sometimes be catastrophic if the file system meta-data were among the affected sectors. As disk density rapidly increases, the likelihood of losing data also rises. This paper describes our experience in analyzing partial disk failures using the physical locations of damaged disk sectors to assess the extent and characteristics of the damage on disk platter surfaces. Based on our findings, we propose several fault-tolerance techniques to proactively guard against permanent data loss due to partial disk failures.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.015773
0.015393
0.012148
0.010413
0.00639
0.004495
0.002003
0.000744
0.000129
0.00004
0.000002
0
0
0
A heuristic load balancing scheduling method for dedicated machine constraint The dedicated machine constraint for the photolithography process in semiconductor manufacturing is one of the new challenges introduced into photolithography machinery by natural bias. Under this constraint, the wafers passing through each photolithography process have to be processed on the same machine. The purpose of the limitation is to prevent the impact of natural bias. However, many scheduling policies or modeling methods proposed by previous research on semiconductor manufacturing systems have not discussed the dedicated machine constraint. We propose the Load Balancing (LB) scheduling method, based on a Resource Schedule and Execution Matrix (RSEM), to tackle this constraint. The LB method uses the RSEM as a tool to represent the temporal relationship between the wafer lots and machines. The LB method schedules each wafer lot at the first photolithography stage to a suitable machine according to the load balancing factors among photolithography machines. In the paper, we present an example to demonstrate the LB method and the results of simulations that validate the method.
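A stripped-down sketch of the balancing decision at the first photolithography stage, with the RSEM bookkeeping abstracted into a plain load table; the names and the unit-load metric are illustrative:

def assign_first_photo_stage(lots, eligible, load):
    # lots: lot ids; eligible[lot]: machines the lot may be dedicated to;
    # load[m]: current load on machine m. Each lot is then bound to its
    # chosen machine for all later photolithography steps (dedication).
    dedication = {}
    for lot in lots:
        target = min(eligible[lot], key=lambda m: load[m])  # least loaded
        dedication[lot] = target
        load[target] += 1
    return dedication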
Load Balancing Among Photolithography Machines in the Semiconductor Manufacturing System We propose a Load Balancing (LB) scheduling approach to tackle the load balancing issue in the semiconductor manufacturing system. This issue derives from the dedicated photolithography machine constraint. The constraint of having a dedicated machine for the photolithography process in semiconductor manufacturing is one of the new challenges introduced into photolithography machinery by natural bias. To prevent the impact of natural bias, the wafer lots passing through each photolithography process have to be processed on the same machine. However, previous research on semiconductor manufacturing production has not addressed the load balancing issue under the dedicated photolithography machine constraint. In this paper, along with providing the LB approach to the issue, we also present a novel model, the Resource Schedule and Execution Matrix (RSEM) - the representation and manipulation methods for the task process patterns. The advantage of the proposed approach is that the wafer lots can be scheduled easily by using a simple two-dimensional matrix. We also present simulation results that validate our approach.
Simulation-based solution of load-balancing problems in the photolithography area of a semiconductor wafer fabrication facility In this paper we present the results of a simulation study for the solution of load-balancing problems in a semiconductor wafer fabrication facility. In the bottleneck area of photolithography the steppers form several different subgroups. These subgroups differ, for example, in the size of the masks that have to be used for processing lots on the steppers of a single group. During lot release it is necessary to distribute the lots over the different stepper groups in such a way that global targets like cycle time minimization, the maximization of the number of finished lots and due date performance are inside a certain range. We present a simulation model of a wafer fab that models the photolithography area in a detailed manner. By means of this simulation model it is possible to decide at release time on which stepper subgroup processing of the lots of a certain product is favorable.
A rapid modeling technique for measurable improvements in factory performance This paper discusses a methodology for quickly investigating problem areas in semiconductor wafer fabrication factories by creating a model for the production area of interest only (as opposed to a model of the complete factory operation). All other factory operations are treated as “black boxes”. Specific assumptions are made to capture the effect of re-entrant flow. This approach allows a rapid response to production questions when beginning a new simulation project. The methodology was applied to a cycle-time and capacity analysis of the photolithography operation for Siemens' Dresden wafer fab. The results of this simulation study are presented.
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
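The availability arithmetic behind such a design is simple: with r independently placed replicas on machines that are each up with probability p, a read fails only when every replica is down. A one-line check, with the uptime and replica count below being assumed example values:

def file_availability(p_up, replicas):
    # P(at least one replica reachable), independent machine uptimes.
    return 1.0 - (1.0 - p_up) ** replicas

print(file_availability(0.95, 3))   # 0.999875 for 3 replicas at 95% uptime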
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Indexing By Latent Semantic Analysis
Disk Shadowing Disk shadowing is a technique for maintaining a set of two or more identical disk images on separate disk devices. Its primary purpose is to enhance reliability and availability of secondary storage by providing multiple paths to redundant data. However, shadowing can also boost I/O performance. In this paper, we contend that intelligent device scheduling of shadowed disks increases the I/O rate by allowing parallel reads and by substantially reducing the average seek time for random reads. In particular, we develop an analytic model which shows that the seek time for a random read in a shadow set is a monotonic decreasing function of the number of disks.
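That monotonic decrease is easy to check numerically. A Monte Carlo sketch assuming independent, uniformly distributed arm positions and read targets on a unit cylinder range, which is a simplification of the paper's analytic model:

import random

def mean_min_seek(n_disks, trials=100_000):
    total = 0.0
    for _ in range(trials):
        target = random.random()
        arms = [random.random() for _ in range(n_disks)]
        total += min(abs(a - target) for a in arms)   # read nearest arm
    return total / trials

for k in (1, 2, 4):
    print(k, round(mean_min_seek(k), 4))   # expected seek shrinks with k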
Fine-Grained Mobility in the Emerald System (Extended Abstract)
Representing actions in logic programs and default theories a situation calculus approach We address the problem of representing common sense knowledge about action domains in the formalisms of logic programming and default logic. We employ a methodology proposed by Gelfond and Lifschitz which involves first defining a high-level language for representing knowledge about actions, and then specifying a translation from the high-level action language into a general-purpose formalism, such as logic programming. Accordingly, we define a high-level action language AE, and specify sound and complete translations of portions of AE into logic programming and default logic. The language AE includes propositions that represent “static causal laws” of the following kind: a fluent formula ψ can be made true by making a fluent formula φ true (or, more precisely, ψ is caused whenever φ is caused). Such propositions are more expressive than the state constraints traditionally used to represent background knowledge. Our translations of AE domain descriptions into logic programming and default logic are simple, in part because the noncontrapositive nature of causal laws is easily reflected in such rule-based formalisms.
The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms A fundamental challenge in improving file system performance is to design effective block replacement algorithms to minimize buffer cache misses. Despite the well-known interactions between prefetching and caching, almost all buffer cache replacement algorithms have been proposed and studied comparatively, without taking into account file system prefetching, which exists in all modern operating systems. This paper shows that such kernel prefetching can have a significant impact on the relative performance in terms of the number of actual disk I/Os of many well-known replacement algorithms; it can not only narrow the performance gap but also change the relative performance benefits of different algorithms. Moreover, since prefetching can increase the number of blocks clustered for each disk I/O and, hence, the time to complete the I/O, the reduction in the number of disk I/Os may not translate into proportional reduction in the total I/O time. These results demonstrate the importance of buffer caching research taking file system prefetching into consideration and comparing the actual disk I/Os and the execution time under different replacement algorithms.
Representing the Process Semantics in the Event Calculus In this paper we shall present a translation of the process semantics [5] to the event calculus. The aim is to realize a method of integrating high-level semantics with logical calculi to reason about continuous change. The general translation rules and the soundness and completeness theorem of the event calculus with respect to the process semantics are main technical results of this paper.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.10208
0.0208
0.018838
0.000618
0
0
0
0
0
0
0
0
0
0
Ramification and causality in a modal action logic The paper presents a logic for action theory based on a modal language, where modalities represent actions. The frame problem is tackled by using a nonmonotonic formalism which maximizes persistency assumptions. The problem of ramification is tackled by introducing a modal causality operator which is used to represent causal rules. Assumptions on the value of fluents in the initial state allow rea...
Iterative belief revision in extended logic programming Extended logic programming augments conventional logic programming with both default and explicit negation. Several semantics for extended logic programs have been proposed that extend the well-founded semantics for logic programs with default negation (called normal programs). We show that two of these extended semantics are intractable; both Dung's grounded argumentation semantics and the well-founded semantics of Alferes et al. are NP-hard. Nevertheless, we also show that these two semantics have a common core, a more restricted form of the grounded semantics, which is tractable and can be computed iteratively in quadratic time. Moreover, this semantics is a representative of a rich class of tractable semantics based on a notion of iterative belief revision.
Programming Rational Agents in a Modal Action Logic In this paper we describe a language for reasoning about actions that can be used for modelling and for programming rational agents. We propose a modal approach for reasoning about dynamic domains in a logic programming setting. Agent behavior is specified by means of complex actions which are defined using modal inclusion axioms. The language is able to handle knowledge producing actions as well as actions which remove information. The problem of reasoning about complex actions with incomplete knowledge is tackled, and the temporal projection and planning problems are addressed; more specifically, a goal directed proof procedure is defined, which allows agents to reason about complex actions and to generate conditional plans. We give a non-monotonic solution for the frame problem by making use of persistency assumptions in the context of an abductive characterization. The language has been used for implementing an adaptive web-based system.
Frame problem in dynamic logic This paper provides a formal analysis of the solutions of the frame problem by using dynamic logic. We encode Pednault's syntax-based solution, Baker's state-minimization policy, and Gelfond & Lifschitz's Action Language A in the propositional dynamic logic (PDL). The formal relationships among these solutions are given. The results of the paper show that dynamic logic, as one of the formalisms for reasoning about dynamic domains, can be used as a formal tool for comparing, analyzing and unifying logics of action.
Modal Tableaux for Reasoning About Actions and Plans In this paper we investigate tableau proof procedures for reasoning about actions and plans. Our framework is a multimodal language close to that of propositional dynamic logic, wherein we solve the frame problem by introducing the notion of dependence as a weak causal connection between actions and atoms. The tableau procedure is sound and complete for an important fragment of our language, within which all standard problems of reasoning about actions can be expressed, in particular planning...
Extended stable semantics for normal and disjunctive programs
What is planning in the presence of sensing? Despite the existence of programs that are able to generate so-called conditional plans, there has yet to emerge a clear and general specification of what it is these programs are looking for: what exactly is a plan in this setting, and when is it correct? In this paper, we develop and motivate a specification within the situation calculus of conditional and iterative plans over domains that include binary sensing actions. The account is built on an existing theory of action which includes a solution to the frame problem, and an extension to it that handles sensing actions and the effect they have on the knowledge of a robot. Plans are taken to be programs in a new simple robot program language, and the planning task is to find a program that would be known by the robot at the outset to lead to a final situation where the goal is satisfied. This specification is used to analyze the correctness of a small example plan, as well as variants that have redundant or missing sensing actions. We also investigate whether the proposed robot program language is powerful enough to serve for any intuitively achievable goal.
A Logic for Planning under Partial Observability We propose an epistemic dynamic logic EDL able to represent the interactions between action and knowledge that are fundamental to planning under partial observability. EDL enables us to represent incomplete knowledge, nondeterministic actions, observations, sensing actions and conditional plans; it also enables a logical expression of several frequently made assumptions about the nature of the domain, such as determinism, full observability, unobservability, or pure sensing. Plan verification corresponds to checking the validity of a given EDL formula. The allowed plans are conditional, and a key point of our framework is that a plan is meaningful if and only if the branching conditions bear on the knowledge of the agent only, and not on the real world (to which that agent may not have access); this leads us to consider "plans that reason" which may contain branching conditions referring to implicit knowledge to be evaluated at execution time.
A logic of universal causation For many commonsense reasoning tasks associated with action domains, only a relatively simple kind of causal knowledge is required - knowledge of the conditions under which facts are caused. This note introduces a modal nonmonotonic logic for representing causal knowledge of this kind, relates it to other nonmonotonic formalisms, and shows that a variety of causal theories of action can be expressed in it, including the recently proposed causal action theories of Lin. The new logic extends the...
Formal Characterization of Active Databases In this paper we take a first step towards characterizing active databases. Declarative characterization of active databases allows additional flexibility in studying the effects of different priority criteria between fireable rules, different actions and event definitions, and also to make claims about effects of transactions and prove them without actually executing them. Our characterization is related but different from similar attempts by Zaniolo in terms of making a clear distinction...
Query Order We study the effect of query order on computational power and show that $\mathrm{P}^{\mathrm{BH}_j[1]:\mathrm{BH}_k[1]}$, the class of languages computable via a polynomial-time machine given one query to the $j$th level of the boolean hierarchy followed by one query to the $k$th level of the boolean hierarchy, equals $\mathrm{R}_{j+2k-1\text{-tt}}^{p}(\mathrm{NP})$ if $j$ is even and $k$ is odd, and equals $\mathrm{R}_{j+2k\text{-tt}}^{p}(\mathrm{NP})$ otherwise. Thus unless the polynomial hierarchy collapses it holds that, for each $1 \leq j \leq k$: $\mathrm{P}^{\mathrm{BH}_j[1]:\mathrm{BH}_k[1]} = \mathrm{P}^{\mathrm{BH}_k[1]:\mathrm{BH}_j[1]}$ if and only if $j = k$ or ($j$ is even and $k = j+1$). We extend our analysis to apply to more general query classes.
Maintenance of views In relational databases a view definition is a query against the database, and a view materialization is the result of applying the view definition to the current database. A view materialization over a database may change as relations in the database undergo modifications. In this paper a mechanism is proposed in which the view is materialized at all times. The problem which this mechanism addresses is how to quickly update the view in response to database changes. A structure is maintained which provides information useful in minimizing the amount of work caused by updates. Methods are presented for handling both general databases and the much simpler tree databases (also called acyclic databases). In both cases adding or deleting a tuple can be performed in polynomial time. For tree databases the degree of the polynomial is independent of the schema structure, while for cyclic databases the degree depends on the schema structure. The cost of a sequence of tuple additions (deletions) is also analyzed.
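The mechanism that keeps the view current is delta maintenance: on a tuple addition, join only the new tuple against the other relation and merge the result into the materialization. A minimal sketch for a two-relation natural-join view, using hash indexes as the maintained structure (our simplification of the paper's structure):

from collections import defaultdict

class JoinView:
    # Materialized view V = R join S on a single shared attribute.
    def __init__(self):
        self.r_index = defaultdict(list)   # join key -> R payloads
        self.s_index = defaultdict(list)   # join key -> S payloads
        self.view = []

    def add_r(self, key, payload):
        self.r_index[key].append(payload)
        for s in self.s_index[key]:        # join the delta only
            self.view.append((key, payload, s))

    def add_s(self, key, payload):
        self.s_index[key].append(payload)
        for r in self.r_index[key]:
            self.view.append((key, r, payload))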
Adaptive placement of method executions within a customizable distributed object-based runtime system: design, implementation and performance This paper presents the design and implementation of a mechanism aimed at enhancing the performance of distributed object-based applications. This goal is achieved by means of a new algorithm implementing placement of method executions that adapts to processors' load and to objects' characteristics, the latter allowing approximation of the cost of methods' remote execution. The behavior of the proposed placement algorithm is examined by providing performance measures obtained from its integration within a customizable distributed object-based runtime system. In particular, the cost of method executions using our algorithm is compared with the cost resulting from the standard placement technique that consists of executing any method on the storing node of its embedding object.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.028877
0.02963
0.022222
0.022222
0.013231
0.004511
0.001159
0.000095
0.000019
0
0
0
0
0
A Hierarchical Assessment of Adversarial Severity Adversarial Robustness is a growing field that evidences the brittleness of neural networks. Although the literature on adversarial robustness is vast, a dimension is missing in these studies: assessing how severe the mistakes are. We call this notion "Adversarial Severity" since it quantifies the downstream impact of adversarial corruptions by computing the semantic error between the misclassific...
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
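The preprocessor alluded to is the standard renaming construction: replace each classically negated atom ¬p with a fresh atom, and forbid models containing both the atom and its twin. A sketch over rules encoded as (head, body) pairs; the encoding is our own:

def eliminate_classical_negation(rules, atoms):
    # Literals are (atom, positive) pairs; positive=False marks classical
    # negation. Default negation is left out of this sketch.
    ren = lambda lit: lit[0] if lit[1] else "not_" + lit[0]
    out = [(ren(head), [ren(b) for b in body]) for head, body in rules]
    # Integrity constraints: p and not_p may never hold together
    # (a None head stands for falsity, i.e. the rule ':- p, not_p').
    out += [(None, [a, "not_" + a]) for a in atoms]
    return out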
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
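The baseline being improved, a Davis-Putnam-style recursive evaluation of a prenex QBF, fits in a few lines. A naive sketch with no pruning, so it branches on every universal variable, which is exactly the cost the paper's techniques attack:

def eval_qbf(prefix, clauses, assign=None):
    # prefix: list of ('A'|'E', var); clauses: lists of signed ints
    # (DIMACS-style literals). Returns True iff the QBF is true.
    assign = assign or {}
    if not prefix:
        return all(any(assign.get(abs(l), False) == (l > 0) for l in c)
                   for c in clauses)
    q, v = prefix[0]
    branches = (eval_qbf(prefix[1:], clauses, {**assign, v: val})
                for val in (False, True))
    return all(branches) if q == 'A' else any(branches)

# (Ax)(Ey) (x or y) and (not x or not y): true, pick y = not x.
print(eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))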
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
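At its core the method is an eigendecomposition of the centered Gram matrix. A compact numpy sketch; the RBF kernel and its bandwidth are illustrative choices, not fixed by the paper:

import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # RBF Gram matrix
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # top components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # projections of X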
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
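The 'square root' is a triangular factorization: solve the sparse least-squares problem by factoring the information matrix I = A^T A into R^T R and substituting twice, rather than inverting I as an EKF effectively does. A toy dense sketch, where A is an assumed pre-whitened measurement Jacobian:

import numpy as np

def smooth(A, b):
    # Solve min ||A x - b||^2 via I = A^T A = R^T R (Cholesky). In SLAM,
    # I is sparse and a good column ordering keeps the factor R sparse.
    I = A.T @ A
    R = np.linalg.cholesky(I)           # lower-triangular factor
    y = np.linalg.solve(R, A.T @ b)     # forward substitution
    return np.linalg.solve(R.T, y)      # back substitution: states + map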
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Low Cost Peer-to-Peer Collaborative Caching for Clusters
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Cluster File System for High Data Availability Using Locality-Aware Partial Replication Most traditional cluster file systems replicate whole files to support data availability. However, considering that the access patterns of file systems have high locality, replicating whole files is not an efficient strategy to improve data availability. This paper presents the design and implementation of HA-PVFS, a PVFS supporting high data availability, in which replication is performed partially based on locality information. Our performance results show that HA-PVFS adapts dynamically with little overhead over various access patterns.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
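To make the baseline mentioned in this abstract concrete, here is a minimal Python sketch of the straightforward recursive evaluation of a closed QBF; the encoding (a quantifier prefix plus a CNF matrix of signed integer literals) and the function name are illustrative assumptions, not the paper's prover.

def evaluate_qbf(prefix, clauses, assignment=None):
    # prefix: list of ('forall' | 'exists', var) pairs; clauses: CNF matrix
    # as lists of signed integer literals. Assumes the formula is closed.
    assignment = assignment or {}
    if not prefix:
        # Every clause must contain at least one true literal.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)
    (quant, var), rest = prefix[0], prefix[1:]
    results = (evaluate_qbf(rest, clauses, {**assignment, var: val})
               for val in (False, True))
    return all(results) if quant == 'forall' else any(results)

# forall x1 exists x2 : (x1 or x2) and (not x1 or not x2); true via x2 = not x1
print(evaluate_qbf([('forall', 1), ('exists', 2)], [[1, 2], [-1, -2]]))  # True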
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
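The kernel trick summarized in this abstract can be sketched in a few lines: form a kernel matrix, center it in feature space, and project onto its leading eigenvectors. The RBF kernel and the parameter names below are illustrative choices, not prescribed by the paper.

import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # RBF kernel matrix from pairwise squared Euclidean distances.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; take the largest ones.
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scale eigenvectors so each component has unit norm in feature space.
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas  # projections of the training points

X = np.random.rand(100, 5)
Z = kernel_pca(X, n_components=2, gamma=0.5)  # shape (100, 2)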
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
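A minimal sketch of the opportunity this abstract exploits, assuming a simple hypothetical policy rather than RT-DMQ or RT-CMQ themselves: writes must update both copies, while each read can be dispatched to whichever mirror currently has the shorter queue, with requests ordered by deadline.

import heapq

class MirroredPair:
    def __init__(self):
        self.queues = [[], []]  # one deadline-ordered heap per disk

    def write(self, deadline, req):
        for q in self.queues:  # a write must update both copies
            heapq.heappush(q, (deadline, req))

    def read(self, deadline, req):
        q = min(self.queues, key=len)  # least-loaded mirror serves the read
        heapq.heappush(q, (deadline, req))

pair = MirroredPair()
pair.write(10, "blk-a")
pair.read(5, "blk-b")  # lands on whichever queue is shorter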
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
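The factorization idea can be shown on a toy dense problem: smoothing reduces to a linear least-squares problem min ||A theta - b||^2 over the whole trajectory and map, and a QR factorization of the measurement Jacobian solves it by back-substitution. The random matrices below stand in for a real SLAM Jacobian and residual; the sparsity and variable ordering central to the paper are omitted.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 12))  # stand-in measurement Jacobian
b = rng.standard_normal(50)        # stand-in residual vector

Q, R = np.linalg.qr(A)               # R is a square-root information factor
theta = np.linalg.solve(R, Q.T @ b)  # back-substitution: R theta = Q^T b

# Same solution as the normal equations (A^T A) theta = A^T b, but better
# conditioned, since R^T R = A^T A.
assert np.allclose(A.T @ A @ theta, A.T @ b)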
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in the design the option of adding extra redundancy once we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that mirror the contents of half the existing parity elements. Our approach offers three advantages: it is easy to deploy, it does not affect the complexity of parity calculations, and it provides a five-year reliability of 99.999 percent even at catastrophic failure rates that would cost the array up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A power saving storage method that considers individual disk rotation Reducing the power consumption of storage systems is now considered a major issue, alongside the maintenance of system reliability, availability, and performance. In this paper, we propose a method named the Replica-assisted Power Saving Disk Array (RAPoSDA) to reduce the electrical consumption of storage systems. RAPoSDA uses a primary-backup configuration to ensure system reliability, and it dynamically controls the timing and targeting of disk accesses based on the rotation state of each individual disk. We evaluated the effectiveness of RAPoSDA by developing a simulator that we used to compare the performance and power consumption of RAPoSDA with Massive Arrays of Inactive Disks (MAID), a well-known power-reducing disk array. The experimental results demonstrated that RAPoSDA provided superior power reduction and a shorter average response time compared with MAID.
EERAID: energy efficient redundant and inexpensive disk array Recent research on conserving energy in multi-disk systems has worked either at the level of a single disk drive or at the level of the whole storage system, and each approach has certain limitations. This paper studies several new redundancy-based, power-aware I/O request scheduling and cache management policies at the RAID controller level to build energy-efficient RAID systems, exploiting the redundant information and destage behavior of the array for two popular RAID levels, RAID 1 and RAID 5. For RAID 1, we develop a Windowed Round Robin (WRR) request scheduling policy; for RAID 5, we introduce an N-chance Power Aware cache replacement algorithm (NPA) for writes and a Power-Directed, Transformable (PDT) request scheduling policy for reads. Trace-driven simulation shows that EERAID saves much more energy than legacy RAID systems and existing solutions.
PARAID: a gear-shifting power-aware RAID Reducing power consumption for server computers is important, since increased energy usage causes increased heat dissipation, greater cooling requirements, reduced computational density, and higher operating costs. For a typical data center, storage accounts for 27% of energy consumption. Conventional server-class RAIDs cannot easily reduce power because loads are balanced to use all disks even for light loads. We have built the Power-Aware RAID (PARAID), which reduces energy use of commodity server-class disks without specialized hardware. PARAID uses a skewed striping pattern to adapt to the system load by varying the number of powered disks. By spinning disks down during light loads, PARAID can reduce power consumption, while still meeting performance demands, by matching the number of powered disks to the system load. Reliability is achieved by limiting disk power cycles and using different RAID encoding schemes. Based on our five-disk prototype, PARAID uses up to 34% less power than conventional RAIDs, while achieving similar performance and reliability.
Hibernator: helping disk arrays sleep through the winter Energy consumption has become an important issue in high-end data centers, and disk arrays are one of the largest energy consumers within them. Although several attempts have been made to improve disk array energy management, the existing solutions either provide little energy savings or significantly degrade performance for data center workloads. Our solution, Hibernator, is a disk array energy management system that provides improved energy savings while meeting performance goals. Hibernator combines a number of techniques to achieve this: the use of disks that can spin at different speeds, a coarse-grained approach for dynamically deciding which disks should spin at which speeds, efficient ways to migrate the right data to an appropriate-speed disk automatically, and automatic performance boosts if there is a risk that performance goals might not be met due to disk energy management. In this paper, we describe the Hibernator design, and present evaluations of it using both trace-driven simulations and a hybrid system comprised of a real database server (IBM DB2) and an emulated storage server with multi-speed disks. Our file-system and on-line transaction processing (OLTP) simulation results show that Hibernator can provide up to 65% energy savings while continuing to satisfy performance goals (6.5 to 26 times better than previous solutions). Our OLTP emulated-system results show that Hibernator can save more energy (29%) than previous solutions, while still providing an OLTP transaction rate comparable to a RAID5 array with no energy management.
Extended stable semantics for normal and disjunctive programs
A neural probabilistic language model A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that it allows the model to take advantage of longer contexts.
On the scale and performance of cooperative Web proxy caching While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
A trace-driven analysis of the UNIX 4.2 BSD file system
Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI 99, Stockholm, Sweden, July 31 - August 6, 1999. 2 Volumes, 1450 pages
DC++: distributed object-oriented system support on top of OSF DCE The OSF Distributed Computing Environment (DCE) is becoming an industry standard for open distributed computing. However, DCE only supports client/server-style applications based on the remote procedure call (RPC) communication model. This paper describes the design and implementation of an extended distributed object-oriented environment, DC++, on top of DCE. As opposed to RPC, it supports a uniform object model, location-independent invocation of fine-grained objects, remote reference parameter passing, dynamic migration of objects between nodes, and C++ language integration. Moreover, the implementation is fully integrated with DCE, using DCE UUIDs for object identification, DCE threads for interobject concurrency, DCE RPC for remote object invocation, and the DCE Cell Directory Service (CDS) for optional retrieval of objects by name. An additional stub compiler enables automatic generation of C++-based object communication interfaces. Low-level parameter encoding is done by DCE RPC's stub generation facility using the C-based DCE interface definition language (IDL). The system has been fully implemented and tested by building an office application. Experiences with the existing system and performance results are also reported in the paper. Furthermore, an earlier, less transparent implementation by our group, which used DCE RPC as a pure transport-level mechanism, is compared with the described approach. Related C++ extensions and standardization efforts are also compared with our work.
Encoding Planning Problems in Nonmonotonic Logic Programs We present a framework for encoding planning problems in logic programs with negation as failure, having computational efficiency as our major consideration. In order to accomplish our goal, we bring together ideas from logic programming and the planning systems graphplan and satplan. We discuss different representations of planning problems in logic programs, point out issues related to their performance, and show ways to exploit the structure of the domains in these representations.
The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms A fundamental challenge in improving file system performance is to design effective block replacement algorithms to minimize buffer cache misses. Despite the well-known interactions between prefetching and caching, almost all buffer cache replacement algorithms have been proposed and studied comparatively, without taking into account file system prefetching, which exists in all modern operating systems. This paper shows that such kernel prefetching can have a significant impact on the relative performance in terms of the number of actual disk I/Os of many well-known replacement algorithms; it can not only narrow the performance gap but also change the relative performance benefits of different algorithms. Moreover, since prefetching can increase the number of blocks clustered for each disk I/O and, hence, the time to complete the I/O, the reduction in the number of disk I/Os may not translate into proportional reduction in the total I/O time. These results demonstrate the importance of buffer caching research taking file system prefetching into consideration and comparing the actual disk I/Os and the execution time under different replacement algorithms.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in the design the option of adding extra redundancy once we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that mirror the contents of half the existing parity elements. Our approach offers three advantages: it is easy to deploy, it does not affect the complexity of parity calculations, and it provides a five-year reliability of 99.999 percent even at catastrophic failure rates that would cost the array up to a quarter of its storage capacity in a year.
1.2
0.011765
0.009091
0.006452
0
0
0
0
0
0
0
0
0
0
Convolutional adaptive denoising autoencoders for hierarchical feature extraction. Convolutional neural networks (CNNs) are typical structures for deep learning and are widely used in image recognition and classification. However, the random initialization strategy tends to become stuck at local plateaus or even diverge, which results in rather unstable and ineffective solutions in real applications. To address this limitation, we propose a hybrid deep learning CNN-AdapDAE model, which applies the features learned by the AdapDAE algorithm to initialize CNN filters and then trains the improved CNN for classification tasks. In this model, AdapDAE is proposed as a CNN pre-training procedure that adaptively obtains the noise level based on the principle of annealing, starting with a high level of noise and lowering it as training progresses. Thus, the features learned by AdapDAE include a combination of features at different levels of granularity. Extensive experimental results on the STL-10, CIFAR-10, and MNIST datasets demonstrate that the proposed algorithm performs favorably compared to CNN (random filters), CNNAE (filters pre-trained by an autoencoder), and a few other unsupervised feature learning methods.
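A minimal sketch of the annealing idea from this abstract, assuming a linear schedule and masking noise; the paper's exact schedule and noise model may differ.

import numpy as np

def noise_level(epoch, n_epochs, start=0.7, end=0.05):
    # Linear annealing from a high starting noise down to a low floor.
    frac = min(epoch / max(n_epochs - 1, 1), 1.0)
    return start + frac * (end - start)

def corrupt(batch, level, rng):
    # Masking noise: zero each input unit independently with prob `level`.
    return batch * (rng.random(batch.shape) >= level)

rng = np.random.default_rng(0)
batch = rng.random((32, 784))
for epoch in range(10):
    noisy = corrupt(batch, noise_level(epoch, 10), rng)
    # ... train the autoencoder to reconstruct `batch` from `noisy` ...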
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representations for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
BioSEAL: In-Memory Biological Sequence Alignment Accelerator for Large-Scale Genomic Data. Genome sequences contain hundreds of millions of DNA base pairs. Finding the degree of similarity between two genomes requires executing a compute-intensive dynamic programming algorithm, such as Smith-Waterman. Traditional von Neumann architectures have limited parallelism and cannot provide an efficient solution for large-scale genomic data. Approximate heuristic methods (e.g. BLAST) are commonly used, but they are suboptimal and still compute-intensive. In this work, we present BioSEAL, a biological sequence alignment accelerator. BioSEAL is a massively parallel non-von Neumann processing-in-memory architecture for large-scale DNA and protein sequence alignment. BioSEAL is based on resistive content-addressable memory, capable of energy-efficient and high-performance associative processing. We present an associative processing algorithm for entire-database sequence alignment on BioSEAL and compare its performance and power consumption with state-of-the-art solutions. We show that BioSEAL can achieve up to 57× speedup and 156× better energy efficiency, compared with existing solutions for genome sequence alignment and protein sequence database search.
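For reference, the dynamic program that BioSEAL accelerates can be written down directly; this is a plain CPU computation of the Smith-Waterman local-alignment score, not the paper's associative in-memory design, and the scoring parameters are illustrative.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # H[i][j] holds the best local-alignment score ending at a[i-1], b[j-1].
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # small DNA example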
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representations for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Shortest Reconfiguration of Matchings Imagine that unlabelled tokens are placed on edges forming a matching of a graph. A token can be moved to another edge provided that the edges containing tokens remain a matching. The distance between two configurations of tokens is the minimum number of moves required to transform one into the other. We study the problem of computing the distance between two given configurations. We prove that if the source and target configurations are maximal matchings, then the problem admits no polynomial-time sublogarithmic-factor approximation algorithm unless P = NP. On the positive side, we show that for matchings of bipartite graphs the problem is fixed-parameter tractable when parameterized by the size d of the symmetric difference of the two given configurations. Furthermore, we obtain a d^epsilon-factor approximation algorithm for the distance between two maximum matchings of bipartite graphs for every epsilon > 0. The proofs of our positive results are constructive and can hence be turned into algorithms that output shortest transformations. Both algorithmic results rely on a close connection to the Directed Steiner Tree problem. Finally, we show that determining the exact distance between two configurations is complete for the class D^p, and that determining the maximum distance between any two configurations of a given graph is D^p-hard.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the nonmonotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Efficient top-down computation of queries under the well-founded semantics The well-founded model provides a natural and robust semantics for logic programs with negative literals in rule bodies. Although various procedural semantics have been proposed for query evaluation under the well-founded semantics, the practical issues of implementation for effective and efficient computation of queries have been rarely discussed.
A Logic Programming System for Nonmonotonic Reasoning The evolution of logic programming semantics has included the introduction of a new explicit form of negation, beside the older implicit (or default) negation typical of logic programming. The richer language has been shown adequate for a spate of knowledge representation and reasoning forms.
Proving Termination of General Prolog Programs We study here termination of general logic programs with the Prolog selection rule. To this end we extend the approach of Apt and Pedreschi [AP90] and consider the class of left terminating general programs. These are general logic programs that terminate with the Prolog selection rule for all ground goals. We introduce the notion of an acceptable program and prove that acceptable programs are left terminating. This provides us with a practical method of proving termination.
Expanding queries to incomplete databases by interpolating general logic programs In databases, queries are usually defined on complete databases. In this paper we introduce and motivate the notion of extended queries that are defined on incomplete databases. We argue that the language of extended logic programs is appropriate for representing extended queries. We show through examples that given a query, a particular extension of it has important characteristics which correspond to removal of the CWA from the original specification of the query. We refer to this particular extension as the expansion of the original query. Normally queries are expressed as general logic programs. We develop an algorithm that, given a general logic program (satisfying certain syntactic properties) expressing a query, constructs an extended logic program that expresses the expanded query. The extended logic program is referred to as the interpolation of the given general logic program.
A new definition of SLDNF-resolution We propose a new, "top-down" definition of SLDNF-resolution which retains the spirit of the original definition but avoids the difficulties noted in the literature. We compare it with the "bottom-up" definition of Kunen [Kun89]. 1 The problem The notion of SLD-resolution of Kowalski [Kow74] allows us to resolve only positive literals. As a result it is not adequate to compute with general programs. Clark [Cla79] proposed to incorporate the negation as finite failure rule. This leads to an...
Pushing Goal Derivation in DLP Computations dlv is a knowledge representation system, based on disjunctive logic programming, which offers front-ends to several advanced KR formalisms. This paper describes new techniques for the computation of answer sets of disjunctive logic programs, that have been developed and implemented in the dlv system. These techniques try to "push" the query goals in the process of model generation (query goals are often present either explicitly, like in planning and diagnosis, or implicitly in the form of integrity constraints). This way, a lot of useless models are discarded "a priori" and the computation converges rapidly toward the generation of the "right" answer set. A few preliminary benchmarks show dramatic efficiency gains due to the new techniques.
Dependent Fluents We discuss the persistence of the indirect effects of an action: the question of when such effects are subject to the commonsense law of inertia, and how to describe their evolution in the cases when inertia does not apply. Our model of nonpersistent effects involves the assumption that the value of the fluent in question is determined by the values of other fluents, although the dependency may be partially or completely unknown. This view leads us to a new high-level action language ARD (for Actions, Ramifications and Dependencies) that is capable of describing both persistent and nonpersistent effects. Unlike the action languages introduced in the past, ARD is "non-Markovian," in the sense that the evolution of the fluents described in this language may depend on their history, and not only on their current values.
What is planning in the presence of sensing? Despite the existence of programs that are able to generate so-called conditional plans, there has yet to emerge a clear and general specification of what it is these programs are looking for: what exactly is a plan in this setting, and when is it correct? In this paper, we develop and motivate a specification within the situation calculus of conditional and iterative plans over domains that include binary sensing actions. The account is built on an existing theory of action which includes a solution to the frame problem, and an extension to it that handles sensing actions and the effect they have on the knowledge of a robot. Plans are taken to be programs in a new simple robot program language, and the planning task is to find a program that would be known by the robot at the outset to lead to a final situation where the goal is satisfied. This specification is used to analyze the correctness of a small example plan, as well as variants that have redundant or missing sensing actions. We also investigate whether the proposed robot program language is powerful enough to serve for any intuitively achievable goal.
From theory to practice: the UTEP robot in the AAAI 96 and AAAI 97 robot contests In this paper we describe the control aspects of Diablo, the UTEP mobile robot participant in two AAAI robot competitions. In the first competition, event one of the AAAI 96 robot contest, Diablo consistently scored 285 out of a total of 295 points. In the second competition, our robot won the first place in the event "Tidy Up" of the home vacuum contest. The main goal in this paper will be to show how the agent theories - based on action theories - developed at UTEP and by Saffiotti et...
Ramification and causality in a modal action logic The paper presents a logic for action theory based on a modal language, where modalities represent actions. The frame problem is tackled by using a nonmonotonic formalism which maximizes persistency assumptions. The problem of ramification is tackled by introducing a modal causality operator which is used to represent causal rules. Assumptions on the value of fluents in the initial state allow rea...
Planning by rewriting Domain-independent planning is a hard combinatorial problem. Taking into account plan quality makes the task even more difficult. This article introduces Planning by Rewriting (PbR), a new paradigm for efficient high-quality domain-independent planning. PbR exploits declarative plan-rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly suboptimal, initial plan into a high-quality plan. In addition to addressing the issues of planning efficiency and plan quality, this framework offers a new anytime planning algorithm. We have implemented this planner and applied it to several existing domains. The experimental results show that the PbR approach provides significant savings in planning effort while generating high-quality plans.
File system design for an NFS file server appliance Network Appliance Corporation recently began shipping a new kind of network server called an NFS file server appliance, which is a dedicated server whose sole function is to provide NFS file service. The file system requirements for an NFS appliance are different from those for a general-purpose UNIX system, both because an NFS appliance must be optimized for network file access and because an appliance must be easy to use. This paper describes WAFL (Write Anywhere File Layout), which is a file system designed specifically to work in an NFS appliance. The primary focus is on the algorithms and data structures that WAFL uses to implement Snapshots, which are read-only clones of the active file system. WAFL uses a copy-on-write technique to minimize the disk space that Snapshots consume. This paper also describes how WAFL uses Snapshots to eliminate the need for file system consistency checking after an unclean shutdown.
Striping in large tape libraries
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes the current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.032801
0.025036
0.017662
0.013419
0.01257
0.005589
0.00313
0.000627
0.000036
0.000007
0
0
0
0
DeepAM: a heterogeneous deep learning framework for intelligent malware detection. With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there has been much research on intelligent malware detection applying data mining and machine learning techniques. Though great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architecture, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from the portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked up with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs as a greedy layer-wise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning. Different from the existing works which only made use of the files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from bottom to up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve the overall performance in malware detection compared with traditional shallow learning methods, deep learning methods with homogeneous framework, and other existing anti-malware scanners. The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.
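One building block named in the abstract, a restricted Boltzmann machine trained layer-wise, can be sketched in a few lines. This is a generic CD-1 trainer on made-up binary data, not the paper's heterogeneous framework; all sizes and the learning rate are assumptions.

```python
# A minimal binary RBM trained with one step of contrastive divergence (CD-1).
import numpy as np

rng = np.random.RandomState(0)
n_vis, n_hid, lr = 20, 8, 0.1
W = 0.01 * rng.randn(n_vis, n_hid)
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = (rng.rand(100, n_vis) > 0.5).astype(float)  # toy binary data

for epoch in range(10):
    for v0 in X:
        # Positive phase: sample hidden units given the data vector.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.rand(n_hid) < p_h0).astype(float)
        # Negative phase: one reconstruction step.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # CD-1 gradient approximation and parameter update.
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)
```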
Supervised deep learning with auxiliary networks Deep learning well demonstrates its potential in learning latent feature representations. Recent years have witnessed an increasing enthusiasm for regularizing deep neural networks by incorporating various side information, such as user-provided labels or pairwise constraints. However, the effectiveness and parameter sensitivity of such algorithms have been major obstacles for putting them into practice. The major contribution of our work is the exposition of a novel supervised deep learning algorithm, which is distinguished by two unique traits. First, it regularizes the network construction by utilizing similarity or dissimilarity constraints between data pairs, rather than sample-specific annotations. Such side information is more flexible and greatly mitigates the workload of annotators. Secondly, unlike prior works, our proposed algorithm decouples the supervision information and intrinsic data structure. We design two heterogeneous networks, each of which encodes either supervision or unsupervised data structure respectively. Specifically, we term the supervision-oriented network the "auxiliary network" since it is principally used for facilitating the parameter learning of the other one and will be removed when handling out-of-sample data. The two networks are complementary to each other and bridged by enforcing the correlation of their parameters. We name the proposed algorithm SUpervision-Guided AutoencodeR (SUGAR). Compared with prior works on unsupervised deep networks and supervised learning, SUGAR better balances numerical tractability and the flexible utilization of supervision information. The classification performance on MNIST digits and eight benchmark datasets demonstrates that SUGAR can effectively improve the performance by using the auxiliary networks, on both shallow and deep architectures. Particularly, when multiple SUGARs are stacked, the performance is significantly boosted. On the selected benchmarks, ours achieves up to 11.35% relative accuracy improvement compared to the state-of-the-art models.
A practical tutorial on autoencoders for nonlinear feature fusion: Taxonomy, models, software and guidelines. • Autoencoders are a growing family of tools for nonlinear feature fusion. • A taxonomy of these methods is proposed, detailing each one of them. • Comparisons to other feature fusion techniques and applications are studied. • Guidelines on autoencoder design and example results are provided. • Available software for building autoencoders is summarized.
A Better Way to Pretrain Deep Boltzmann Machines.
Extracting and composing robust features with denoising autoencoders Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
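The training principle described above is compact enough to sketch: corrupt the input, then minimize reconstruction error against the clean input. Below is a tied-weights NumPy version; the layer sizes, masking-noise level, and toy data are assumptions for illustration, not the paper's setup.

```python
# A bare-bones denoising autoencoder with tied weights and masking corruption.
import numpy as np

rng = np.random.RandomState(0)
n_vis, n_hid, lr, p_corrupt = 16, 6, 0.1, 0.3
W = 0.01 * rng.randn(n_hid, n_vis)
b, c = np.zeros(n_hid), np.zeros(n_vis)

def s(x):
    return 1.0 / (1.0 + np.exp(-x))

X = (rng.rand(200, n_vis) > 0.5).astype(float)  # toy binary data

for epoch in range(20):
    for x in X:
        x_t = x * (rng.rand(n_vis) > p_corrupt)  # masking corruption
        h = s(W @ x_t + b)                       # encode corrupted input
        y = s(W.T @ h + c)                       # decode with tied weights
        # Backprop of squared reconstruction error against the CLEAN input.
        d_out = (y - x) * y * (1.0 - y)
        d_hid = (W @ d_out) * h * (1.0 - h)
        W -= lr * (np.outer(d_hid, x_t) + np.outer(h, d_out))
        c -= lr * d_out
        b -= lr * d_hid
```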
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Affinity analysis of coded data sets Coded data sets are commonly used as compact representations of real world processes. Such data sets have been studied within various research fields from association mining, data warehousing, knowledge discovery, collaborative filtering to machine learning. However, previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval for enabling high performance analysis of data masses that scale beyond traditional approaches. Part of this PhD study focuses on a new type of kernel projection function that can be used to find similarities in sparse discrete data spaces. This study presents experimental results on how information retrieval indexes scale and outperform two common relational data schemas with a leading commercial DBMS for market basket analysis.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community.
Unsupervised Learning of Multiple Motifs in Biopolymers Using Expectation Maximization The MEME algorithm extends the expectation maximization (EM) algorithm for identifying motifs in unaligned biopolymer sequences. The aim of MEME is to discover new motifs in a set of biopolymer sequences where little or nothing is known in advance about any motifs that may be present. MEME innovations expand the range of problems which can be solved using EM and increase the chance of finding good solutions. First, subsequences which actually occur in the biopolymer sequences are used as starting points for the EM algorithm to increase the probability of finding globally optimal motifs. Second, the assumption that each sequence contains exactly one occurrence of the shared motif is removed. This allows multiple appearances of a motif to occur in any sequence and permits the algorithm to ignore sequences with no appearance of the shared motif, increasing its resistance to noisy data. Third, a method for probabilistically erasing shared motifs after they are found is incorporated so that several distinct motifs can be found in the same set of sequences, both when different motifs appear in different sequences and when a single sequence may contain multiple motifs. Experiments show that MEME can discover both the CRP and LexA binding sites from a set of sequences which contain one or both sites, and that MEME can discover both the −10 and −35 promoter regions in a set of E. coli sequences.
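The core EM loop for the simplest "one occurrence per sequence" motif model, which MEME extends as described above, can be written compactly. The sequences, motif width, and pseudocounts below are invented for illustration; this is a sketch of the base model, not the MEME implementation.

```python
# EM for a one-motif-per-sequence model: estimate a position weight matrix
# (PWM) by alternating posteriors over motif start positions (E-step) with
# expected-count re-estimation of the PWM (M-step).
import numpy as np

ALPHABET = "ACGT"

def oops_em(seqs, w, n_iter=50, seed=0):
    rng = np.random.RandomState(seed)
    enc = [np.array([ALPHABET.index(ch) for ch in s]) for s in seqs]
    pwm = rng.dirichlet(np.ones(4), size=w)   # motif model, shape (w, 4)
    bg = np.full(4, 0.25)                     # uniform background
    for _ in range(n_iter):
        counts = np.full((w, 4), 0.1)         # pseudocounts
        for x in enc:
            starts = len(x) - w + 1
            # E-step: posterior over start positions; background factors
            # outside the motif window cancel in the likelihood ratio.
            ll = np.array([np.sum(np.log(pwm[np.arange(w), x[p:p + w]])
                                  - np.log(bg[x[p:p + w]]))
                           for p in range(starts)])
            post = np.exp(ll - ll.max())
            post /= post.sum()
            # M-step accumulation: expected letter counts per motif column.
            for p in range(starts):
                for j in range(w):
                    counts[j, x[p + j]] += post[p]
        pwm = counts / counts.sum(axis=1, keepdims=True)
    return pwm

seqs = ["ACGTACGTTACG", "TTACGTTTTTAC", "GGGTACGTGGGG"]
# Columns should sharpen toward a subsequence shared by all sequences.
print(np.round(oops_em(seqs, w=4), 2))
```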
Planning as search: a quantitative approach We present the thesis that planning can be viewed as problem-solving search using subgoals, macro-operators, and abstraction as knowledge sources. Our goal is to quantify problem-solving performance using these sources of knowledge. New results include the identification of subgoal distance as a fundamental measure of problem difficulty, a multiplicative time-space tradeoff for macro-operators, and an analysis of abstraction which concludes that abstraction hierarchies can reduce exponential problems to linear complexity.
Application performance and flexibility on exokernel systems The exokernel operating system architecture safely gives untrusted software efficient control over hardware and software resources by separating management from protection. This paper describes an exokernel system that allows specialized applications to achieve high performance without sacrificing the performance of unmodified UNIX programs. It evaluates the exokernel architecture by measuring end-to-end application performance on Xok, an exokernel for Intel x86-based computers, and by comparing Xok's performance to the performance of two widely-used 4.4BSD UNIX systems (FreeBSD and OpenBSD). The results show that common unmodified UNIX applications can enjoy the benefits of exokernels: applications either perform comparably on Xok/ExOS and the BSD UNIXes, or perform significantly better. In addition, the results show that customized applications can benefit substantially from control over their resources (e.g., a factor of eight for a Web server). This paper also describes insights about the exokernel approach gained through building three different exokernel systems, and presents novel approaches to resource multiplexing.
Optimizing the Embedded Caching and Prefetching Software on a Network-Attached Storage System As the speed gap between memory and disk is so large today, caching and prefetching are critical to enterprise-class storage applications, which demand high performance. In this paper, we present our study of the performance of a mid-range storage server produced by the Quanta Computer Incorporation. We first analyzed the existing caching mechanism in the server and then developed a fast caching methodology to reduce the cache access latency and processing overhead of the storage controller. In addition, we proposed a new adaptive prefetch scheme that reduces the average disk access time seen by the host. Via trace-driven simulation, we evaluated the performance of our new caching and adaptive prefetch schemes. Our results showed the performance improvement for the TPC-C on-line transaction benchmark.
Representing the process semantics in the situation calculus This paper presents a formal method based on the high-level semantics of processes to reason about continuous change. With a case study we show how the semantics of processes can be integrated with the situation calculus. The soundness and completeness of the situation calculus with respect to the process semantics are proven. Furthermore, a logic programming implementation is provided to support the semantics of processes within the situation calculus.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes the current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.2
0.2
0.1
0.028571
0.000707
0
0
0
0
0
0
0
0
0
Causal Logic This paper proposes a logic for causality based on event trees. Event trees provide a natural and familiar framework for probability and decision theory, but they lack the modularity that would make them convenient models for a logic. Our logic overcomes this deficiency by elaborating the idea of an event tree to that of an event space. This is a space containing situations or instantaneous events, which are related by relations of precedence, refinement, and implication.
Occurrences and Narratives as Constraints in the Branching Structure of the Situation Calculus. The Situation Calculus is a logic of time and change in which there is a distinguished initial situation, and all other situations arise from the different sequences of actions that might be performed starting in the initial one. Within this framework, it is difficult to incorporate the notion of an occurrence, since all situations after the initial one are hypothetical. These occurrences are impo...
A logic-based calculus of events Formal Logic can be used to represent knowledge of many kinds for many purposes. It can be used to formalize programs, program specifications, databases, legislation, and natural language in general. For many such applications of logic a representation of time is necessary. Although there have been several attempts to formalize the notion of time in classical first-order logic, it is still widely believed that classical logic is not adequate for the representation of time and that some form of non-classical Temporal Logic is needed. In this paper, we shall outline a treatment of time, based on the notion of event, formalized in the Horn clause subset of classical logic augmented with negation as failure. The resulting formalization is executable as a logic program. We use the term "event calculus" to relate it to the well-known "situation calculus" (McCarthy and Hayes 1969). The main difference between the two is conceptual: the situation calculus deals with global states whereas the event calculus deals with local events and time periods. Like the event calculus, the situation calculus can be formalized by means of Horn clauses augmented with negation by failure (Kowalski 1979). The main intended applications investigated in this paper are the updating of databases and narrative understanding. In order to treat both cases uniformly we have taken the view that an update consists of the addition of new knowledge to a knowledge base. The effect of explicit deletion of information in conventional databases is obtained without deletion by adding new knowledge about the end of the period of time for which the information holds.
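The event/period view lends itself to a very small executable illustration: a fluent holds at time t if an earlier event initiated it and no later event before t terminated it. The event log and predicates below are invented for illustration, not drawn from the paper.

```python
# Event-calculus-style reasoning over a time-stamped event log.
initiates = {("switch_on", "light_on"), ("log_in", "session")}
terminates = {("switch_off", "light_on"), ("log_out", "session")}
events = [(1, "switch_on"), (3, "log_in"), (5, "switch_off")]  # (time, event)

def holds_at(fluent, t):
    # Replay events strictly before t in time order.
    state = False
    for time, ev in sorted(events):
        if time >= t:
            break
        if (ev, fluent) in initiates:
            state = True
        if (ev, fluent) in terminates:
            state = False
    return state

print(holds_at("light_on", 4))  # True: initiated at 1, not yet terminated
print(holds_at("light_on", 6))  # False: terminated at 5
```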
Representing actions: Laws, observations and hypotheses We propose a modification L1 of the action description language A. The language L1 allows representation of hypothetical situations and hypothetical occurrence of actions (as in A) as well as representation of actual occurrences of actions and observations of the truth values of fluents in actual situations. The corresponding entailment relation formalizes various types of common-sense reasoning about actions and their effects not modeled by previous approaches. As an application of L1 we also present an architecture for intelligent agents capable of observing, planning and acting in a changing environment based on the entailment relation of L1 and use a logic programming approximation of this entailment to implement a planning module for this architecture. We prove the soundness of our implementation and give a sufficient condition for its completeness.
Extended stable semantics for normal and disjunctive programs
A neural probabilistic language model A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.
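The architecture described above is easy to sketch as a forward pass: look up the feature vectors of the n-1 context words, concatenate them, apply a tanh hidden layer, and take a softmax over the vocabulary. Sizes and random weights below are illustrative assumptions; the published model also includes direct input-to-output connections, omitted here.

```python
# Forward pass of a Bengio-style neural probabilistic language model.
import numpy as np

rng = np.random.RandomState(0)
V, d, n_ctx, H = 1000, 30, 3, 50     # vocab, embedding dim, context, hidden
C = 0.01 * rng.randn(V, d)           # shared word feature vectors
Wh = 0.01 * rng.randn(H, n_ctx * d)  # hidden layer weights
bh = np.zeros(H)
Wo = 0.01 * rng.randn(V, H)          # output layer weights
bo = np.zeros(V)

def predict(context_ids):
    x = C[context_ids].reshape(-1)    # concatenate the context embeddings
    h = np.tanh(Wh @ x + bh)
    logits = Wo @ h + bo
    e = np.exp(logits - logits.max()) # numerically stable softmax
    return e / e.sum()                # P(next word | context)

p = predict([12, 7, 256])
print(p.shape, p.sum())  # (1000,) ~1.0
```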
On the scale and performance of cooperative Web proxy caching While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
A trace-driven analysis of the UNIX 4.2 BSD file system
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
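The M/G/1 analysis mentioned in the abstract rests on the Pollaczek-Khinchine formula for the mean waiting time. A small worked computation, with hypothetical arrival-rate and service-time numbers chosen only for illustration:

```python
# Mean response time of an M/G/1 queue via the Pollaczek-Khinchine formula:
# W_q = lambda * E[S^2] / (2 * (1 - rho)), with rho = lambda * E[S].
lam = 50.0    # request arrival rate (requests/s), hypothetical
es = 0.012    # mean service time E[S] in seconds, hypothetical
es2 = 2.0e-4  # second moment E[S^2] in s^2, hypothetical

rho = lam * es                        # utilization; must be < 1 for stability
wq = lam * es2 / (2.0 * (1.0 - rho))  # mean waiting time in queue
resp = wq + es                        # mean response time
print(f"utilization={rho:.2f}, mean response time={resp * 1000:.1f} ms")
```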
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
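A back-of-the-envelope version of this kind of reliability reasoning: under an independence assumption, the chance that at least k of n disks fail within a year is a binomial tail. The disk count and annual failure rate below are hypothetical, and the model ignores repair, so it is only a rough upper-level sanity check, not the paper's analysis.

```python
# Probability that at least k of n disks fail in a year, assuming
# independent failures with a common annual failure probability.
from math import comb

def p_at_least(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_disks, annual_fail = 48, 0.02  # hypothetical array size and failure rate
for k in (1, 2, 3):
    print(k, round(p_at_least(n_disks, k, annual_fail), 6))
```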
1.2
0.028571
0.002532
0.002083
0
0
0
0
0
0
0
0
0
0
Expressive equivalence of planning formalisms A concept of expressive equivalence for planning formalisms based on polynomial transformations is defined. It is argued that this definition is reasonable and useful both from a theoretical and from a practical perspective; if two languages are equivalent, then theoretical results carry over and, more practically, we can model an application problem in one language and then easily use a planner for the other language. In order to cope with the problem of exponentially sized solutions for...
Action Constraints for Planning Recent progress in the applications of propositional planning systems has led to an impressive speed-up of solution time and an increase in tractable problem size. In part, this improvement stems from the use of domain-dependent knowledge in form of state constraints. In this paper we introduce a different class of constraints: action constraints. They express domain-dependent knowledge about the use of actions in solution plans and can express strategies which are used by human planners. The...
Strong bounds on the approximability of two Pspace-hard problems in propositional planning The computational complexity of planning with Strips-style operators has received a considerable amount of interest in the literature. However, the approximability of such problems has only received minute attention. We study two Pspace-hard optimization versions of propositional planning and provide tight upper and lower bounds on their approximability.
What Is the Expressive Power of Disjunctive Preconditions? While there seems to be a general consensus about the expressive power of a number of language features in planning formalisms, one can find many different statements about the expressive power of disjunctive preconditions. Using the "compilability framework," we show that preconditions in conjunctive normal form add to the expressive power of propositional strips, which confirms a conjecture by Backstrom. Further, we show that preconditions in conjunctive normal form do not add any expressive...
Optimal Planning in the Presence of Conditional Effects: Extending LM-Cut with Context Splitting. The LM-Cut heuristic is currently the most successful heuristic in optimal STRIPS planning but it cannot be applied in the presence of conditional effects. Keyder, Hoffmann and Haslum recently showed that the obvious extensions to such effects ruin the nice theoretical properties of LM-Cut. We propose a new method based on context splitting that preserves these properties.
A survey of computational complexity results in systems and control The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial-time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
Planning in polynomial time: the SAS-PUBS class This article describes a polynomial-time, O(n^3), planning algorithm for a limited class of planning problems. Compared to previous work on complexity of algorithms for knowledge-based or logic-based planning, our algorithm achieves computational tractability, but at the expense of only applying to a significantly more limited class of problems. Our algorithm is proven correct, and it always returns a parallel minimal plan if there is a plan at all.
On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision problems We investigate the computability of problems in probabilistic planning and partially observable infinite-horizon Markov decision processes. The undecidability of the string-existence problem for probabilistic finite automata is adapted to show that the following problem of plan existence in probabilistic planning is undecidable: given a probabilistic planning problem, determine whether there exists a plan with success probability exceeding a desirable threshold. Analogous policy-existence problems for partially observable infinite-horizon Markov decision processes under discounted and undiscounted total reward models, average-reward models, and state-avoidance models are all shown to be undecidable. The results apply to corresponding approximation problems as well.
Accuracy of admissible heuristic functions in selected planning domains The efficiency of optimal planning algorithms based on heuristic search crucially depends on the accuracy of the heuristic function used to guide the search. Often, we are interested in domain-independent heuristics for planning. In order to assess the limitations of domain-independent heuristic planning, we analyze the (in)accuracy of common domain-independent planning heuristics in the IPC benchmark domains. For a selection of these domains, we analytically investigate the accuracy of the h+ heuristic, the hm family of heuristics, and certain (additive) pattern database heuristics, compared to the perfect heuristic h*. Whereas h+ and additive pattern database heuristics usually return cost estimates proportional to the true cost, non-additive hm and non-additive pattern-database heuristics can yield results underestimating the true cost by arbitrarily large factors.
What are the Limitations of the Situation Calculus? The situation calculus [8] is a methodology for expressing facts about action and change in formal languages of mathematical logic. It involves expressions for situations and actions (events), and the function Result that relates them to each other. The possibilities and limitations of this methodology have never been systematically investigated, and some of the commonly accepted views on this subject seem to be inaccurate.
The complexity of belief update Belief revision and belief update are two different forms of belief change, and they serve different purposes. In this paper we focus on belief update, the formalization of change in beliefs due to changes in the world. The complexity of the basic update (introduced by Winslett [1990]) has been determined in [Eiter and Gottlob, 1992]. Since then, many other formalizations have been proposed to overcome the limitations and drawbacks of Winslett's update. In this paper we analyze the complexity of the proposals presented in the literature, and relate some of them to previous work on closed world reasoning.
Anticipatory scheduling: a disk scheduling framework to overcome deceptive idleness in synchronous I/O Disk schedulers in current operating systems are generally work-conserving, i.e., they schedule a request as soon as the previous request has finished. Such schedulers often require multiple outstanding requests from each process to meet system-level goals of performance and quality of service. Unfortunately, many common applications issue disk read requests in a synchronous manner, interspersing successive requests with short periods of computation. The scheduler chooses the next request too early; this induces deceptive idleness, a condition where the scheduler incorrectly assumes that the last request issuing process has no further requests, and becomes forced to switch to a request from another process.We propose the anticipatory disk scheduling framework to solve this problem in a simple, general and transparent way, based on the non-work-conserving scheduling discipline. Our FreeBSD implementation is observed to yield large benefits on a range of microbenchmarks and real workloads. The Apache webserver delivers between 29% and 71% more throughput on a disk-intensive workload. The Andrew filesystem benchmark runs faster by 8%, due to a speedup of 54% in its read-intensive phase. Variants of the TPC-B database benchmark exhibit improvements between 2% and 60%. Proportional-share schedulers are seen to achieve their contracts accurately and efficiently.
The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pre-training. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. ...
Anatomical Structure Sketcher For Cephalograms By Bimodal Deep Learning The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnosis and treatment planning in clinical dentistry. The robust anatomical structure detection and accurate annotation remain challenging considering the personal skeletal variations and image blurs caused by device-specific projection magnification, together with structure overlapping in the lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path in a manner similar to data completion. The proposed method is robust not only to structure detection, but also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method.
1.010601
0.009481
0.008891
0.006032
0.005003
0.004428
0.002312
0.001222
0.000114
0.000032
0.000002
0
0
0
Speaker Emotion Recognition: From Classical Classifiers To Deep Neural Networks Speaker emotion recognition is considered among the most challenging tasks in recent years. In fact, automatic systems for security, medicine or education can be improved when considering the speech affective state. In this paper, a twofold approach for speech emotion classification is proposed: first, a relevant set of features is adopted; second, numerous supervised training techniques, involving classic methods as well as deep learning, are evaluated. Experimental results indicate that deep architectures can improve classification performance on two affective databases, the Berlin Dataset of Emotional Speech and the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
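Stable models are simple enough to enumerate by brute force on toy programs: guess a candidate set M, take the Gelfond-Lifschitz reduct, and check that M is the least model of the reduct. The example program below is invented for illustration; the exhaustive enumeration is exponential and only suitable for tiny programs.

```python
# Brute-force stable model finder for a small propositional program.
from itertools import chain, combinations

# Rules encoded as (head, positive_body, negative_body).
program = [
    ("p", (), ("q",)),   # p :- not q.
    ("q", (), ("p",)),   # q :- not p.
    ("r", ("p",), ()),   # r :- p.
]
atoms = {a for h, pos, neg in program for a in (h, *pos, *neg)}

def least_model(rules):
    # Fixpoint of the immediate-consequence operator for a positive program.
    m = set()
    changed = True
    while changed:
        changed = False
        for h, pos in rules:
            if h not in m and all(a in m for a in pos):
                m.add(h)
                changed = True
    return m

def stable_models():
    for cand in chain.from_iterable(
            combinations(sorted(atoms), r) for r in range(len(atoms) + 1)):
        m = set(cand)
        # Reduct: drop rules whose negative body intersects M, then delete
        # the remaining negative literals.
        reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & m)]
        if least_model(reduct) == m:
            yield m

print(list(stable_models()))  # [{'q'}, {'p', 'r'}] (element order may vary)
```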
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
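The baseline that such provers improve on is the naive recursive evaluation of a prenex-CNF QBF, expanding each quantifier into two branches. A sketch with an assumed encoding (signed integers for literals), not the paper's prover:

```python
# Naive recursive evaluator for quantified Boolean formulae in prenex CNF.
def eval_qbf(prefix, clauses, assign=None):
    assign = assign or {}
    if not prefix:
        # Matrix check: every clause needs at least one satisfied literal.
        return all(any(assign.get(abs(l)) == (l > 0) for l in c)
                   for c in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assign, var: val})
                for val in (False, True))
    return all(branches) if q == "forall" else any(branches)

# forall x exists y . (x or y) and (not x or not y): y must equal not x.
prefix = [("forall", 1), ("exists", 2)]
clauses = [[1, 2], [-1, -2]]
print(eval_qbf(prefix, clauses))  # True
```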
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting a classifier trained on a source domain to recognize instances from a new target domain is an important problem that has recently been receiving attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reconstruction of transcriptional regulatory networks by stability-based network component analysis. Reliable inference of transcription regulatory networks is a challenging task in computational biology. Network component analysis (NCA) has become a powerful scheme to uncover regulatory networks behind complex biological processes. However, the performance of NCA is impaired by the high rate of false connections in binding information. In this paper, we integrate stability analysis with NCA to form a novel scheme, namely stability-based NCA (sNCA), for regulatory network identification. The method mainly addresses the inconsistency between gene expression data and binding motif information. Small perturbations are introduced into the prior regulatory network, and the distance among the multiple estimated transcription factor (TF) activities is computed to reflect the stability of each TF's binding network. For target gene identification, multivariate regression and the t-statistic are used to calculate the significance of each TF-gene connection. Simulation studies are conducted, and the experimental results show that sNCA can achieve improved and robust performance in TF identification compared to NCA. The approach for target gene identification is also demonstrated to be suitable for identifying true connections between TFs and their target genes. Furthermore, we have successfully applied sNCA to breast cancer data to uncover the role of TFs in regulating endocrine resistance in breast cancer.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
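To make the semantics concrete, here is a minimal, hedged sketch of the standard stable-model test via the reduct construction; the two-rule program is a made-up example, and the brute-force enumeration over candidate sets is for illustration only:

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body).
rules = [("p", frozenset(), frozenset({"q"})),   # p :- not q.
         ("q", frozenset(), frozenset({"p"}))]   # q :- not p.
atoms = {"p", "q"}

def least_model(definite_rules):
    """Least model of a negation-free program by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    """candidate is stable iff it equals the least model of its reduct."""
    reduct = [(h, pos, frozenset()) for h, pos, neg in rules
              if not (neg & candidate)]          # drop rules blocked by candidate
    return least_model(reduct) == candidate

for s in chain.from_iterable(combinations(sorted(atoms), r)
                             for r in range(len(atoms) + 1)):
    if is_stable(set(s)):
        print("stable model:", set(s) or "{}")   # prints {'p'} and {'q'}
```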
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
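As a hedged sketch of the basic (unimproved) recursive evaluation the abstract starts from, the following evaluates a closed prenex QBF over a CNF matrix: a universal variable requires both branches to be true, an existential one at least one. The example formula is made up:

```python
# prefix: list of ('A' | 'E', var); clauses: CNF over signed integer literals.
def simplify(clauses, lit):
    """Assign lit true: drop satisfied clauses, shrink the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None                   # empty clause: this branch is false
        out.append(reduced)
    return out

def eval_qbf(prefix, clauses):
    """Evaluate a closed QBF (every variable quantified in the prefix)."""
    if clauses is None:
        return False
    if not clauses:
        return True
    (q, v), rest = prefix[0], prefix[1:]
    pos = eval_qbf(rest, simplify(clauses, v))
    neg = eval_qbf(rest, simplify(clauses, -v))
    return (pos and neg) if q == 'A' else (pos or neg)

# forall x exists y: (x or y) and (not x or not y)  -- true, with y = not x
print(eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))  # True
```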
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines by 68% relative to a single-disk system. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been investigated. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting a classifier trained on a source domain to recognize instances from a new target domain is an important problem that has recently been receiving attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers three advantages: it is easy to deploy, it does not affect the complexity of parity calculations, and it provides a five-year reliability of 99.999 percent even at catastrophic failure levels where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Fault-Tolerant node-to-set disjoint-path routing in hypercubes Hypercube-based interconnection networks are one of the most popular network topologies for parallel systems. In a hypercube of dimension n, node addresses are made of n bits, and two nodes are adjacent if and only if their Hamming distance is equal to 1. In this paper we introduce a fault-tolerant hypercube routing algorithm which, in the presence of f faulty nodes, constructs k = n - f disjoint paths connecting one source node to k destination nodes in O(n^2) time complexity.
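For readers unfamiliar with hypercubes, the hedged sketch below illustrates the two basic facts the abstract relies on: adjacency as Hamming distance 1, and simple bit-fixing routing between two nodes. It is not the paper's O(n^2) node-to-set disjoint-path algorithm:

```python
def adjacent(u, v):
    """Two hypercube nodes are adjacent iff their labels differ in exactly one bit."""
    x = u ^ v
    return x != 0 and (x & (x - 1)) == 0   # power-of-two check on the XOR

def bit_fix_route(src, dst, n):
    """Route by correcting differing bits left to right; length = Hamming distance."""
    path, cur = [src], src
    for i in range(n):
        if (cur ^ dst) & (1 << i):
            cur ^= 1 << i
            path.append(cur)
    return path

print(adjacent(0b0101, 0b0100))        # True: one differing bit
print(bit_fix_route(0b000, 0b101, 3))  # [0, 1, 5]
```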
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines by 68% relative to a single-disk system. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have been investigated. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting a classifier trained on a source domain to recognize instances from a new target domain is an important problem that has recently been receiving attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers three advantages: it is easy to deploy, it does not affect the complexity of parity calculations, and it provides a five-year reliability of 99.999 percent even at catastrophic failure levels where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
ALV: A New Data Redistribution Approach to RAID-5 Scaling When a RAID-5 volume is scaled up with added disks, data have to be redistributed from the original disks to all disks, including both the original and the new ones. Existing online scaling techniques suffer from long redistribution times as well as negative impacts on application performance. By leveraging our insight into a reordering window, this paper presents ALV, a new data redistribution approach to RAID-5 scaling. The reordering window results from the natural space hole that opens as data are redistributed, and it grows in size. The data inside the reordering window can migrate in any order without overwriting other in-use data chunks. The ALV approach exploits three novel techniques. First, ALV changes the movement order of data chunks to access multiple successive chunks via a single I/O. Second, ALV updates mapping metadata lazily to minimize the number of metadata writes while ensuring data consistency. Third, ALV uses an on/off logical valve to adaptively adjust the redistribution rate depending on the application workload. We implemented ALV in Linux Kernel 2.6.18 and evaluated its performance by replaying three real-system traces: TPC-C, Cello-99, and SPC-Web. The results demonstrated that ALV consistently outperformed the conventional approach, by 53.31-73.91 percent in user response time and by 24.07-29.27 percent in redistribution time.
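Why scaling forces redistribution at all can be seen in a deliberately simplified model: under plain round-robin striping (parity rotation ignored; this is not ALV's algorithm), most chunk-to-disk mappings change when disks are added:

```python
# Simplified round-robin model of chunk placement: chunk c lives on disk
# (c mod n) at offset (c div n). Adding disks changes most mappings, which
# is the data migration that RAID-5 scaling approaches must perform.
def location(chunk, n_disks):
    return chunk % n_disks, chunk // n_disks   # (disk, offset)

old_n, new_n, chunks = 4, 6, 24                # illustrative sizes (assumptions)
moves = [(c, location(c, old_n), location(c, new_n))
         for c in range(chunks) if location(c, old_n) != location(c, new_n)]
print(f"{len(moves)} of {chunks} chunks must migrate, e.g. {moves[0]}")
```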
Bus Modelling in Zoned Disks RAID Storage Systems. A model of bus contention in a Multi-RAID storage architecture is presented. The model is based on an M/G/1 queue; the main issue is to determine a service time distribution that accurately represents the highly mixed input traffic of requests. This arises from the coexistence of different RAID organisations that generate several types of physical request (read/write for each RAID level) with different associated sizes. The size distributions themselves are made more complex by the striping mechanism, with full/large/small stripes in RAID5. We show the impact of the bus traffic on the system's overall performance as predicted by the model and validated against a simulation of the hardware, using common workload assumptions.
RAID level selection for heterogeneous disk arrays Heterogeneous Disk Arrays (HDAs) allow resource sharing of their hardware by multiple RAID levels. RAID1 (mirrored disks) and RAID5 (distributed parity arrays) are the two RAID levels considered in this study. They are both single disk failure tolerant (1DFT), but differ significantly in their efficiency in processing database workloads. The goal of the study is to maximize the number of Virtual Array (VA) allocations in HDA. We develop an analysis to estimate the load per VA based on a few parameters: the fraction of accesses to small versus large blocks and the fraction of updates versus reads. A VA is allocated according to the RAID level, which minimizes the anticipated load based on input parameters. Operation in normal and degraded mode is considered for comparison purposes, but in fact allocations are carried out using the higher load in degraded mode to ensure that single disk failures will not result in overload. We report on parametric studies to gain insight into circumstances leading to a RAID1 or RAID5 classification. An allocation experiment with a synthetic workload is used to demonstrate the superiority of HDA with respect to purely RAID1 or RAID5 disk arrays. This analytic study can be extended to 2DFT arrays, namely RAID6 versus 3-way replication.
A multiple disk failure recovery scheme in RAID systems In this paper, we propose a practical disk error recovery scheme that tolerates multiple simultaneous disk failures in a typical RAID system, improving availability and reliability. The scheme comprises an encoding process and a decoding process. The encoding process computes one horizontal parity and a number of vertical parities. The decoding process defines a data recovery method for multiple disk failures, including failures of the parity disks. The proposed error recovery scheme is proven to correctly recover the original data under multiple simultaneous disk failures regardless of the positions of the failed disks. It uses only exclusive-OR operations and simple arithmetic operations, so it can be easily implemented on current RAID systems without hardware changes.
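The XOR-only nature of the scheme can be illustrated with its simplest ingredient: a single horizontal parity block recovering one lost block per stripe. This hedged sketch omits the vertical parities the paper adds for multiple simultaneous failures:

```python
import functools, operator

# One stripe of data blocks, one per disk (toy values).
stripe = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
# Horizontal parity: byte-wise XOR of all data blocks in the stripe.
parity = bytes(functools.reduce(operator.xor, t) for t in zip(*stripe))

lost = 1                                       # pretend disk 1 fails
survivors = [blk for i, blk in enumerate(stripe) if i != lost] + [parity]
# XOR of the surviving blocks and the parity restores the lost block.
rebuilt = bytes(functools.reduce(operator.xor, t) for t in zip(*survivors))
assert rebuilt == stripe[lost]
```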
A Highly Accurate Method for Assessing Reliability of Redundant Arrays of Inexpensive Disks (RAID) The statistical bases for current models of RAID reliability are reviewed, and a highly accurate alternative is provided and justified. This new model corrects statistical errors associated with the pervasive assumption that system (RAID group) times to failure follow a homogeneous Poisson process, and corrects errors associated with assuming that the time-to-failure and time-to-restore distributions are exponentially distributed. Statistical justification for the new model uses the theory of reliability of repairable systems. Four critical component distributions are developed from field data: times to catastrophic failure, reconstruction and restoration, read errors, and disk data scrubs. Model results have been verified and predict between 2 and 1,500 times as many double disk failures as estimates made using the mean time to data loss (MTTDL) method. Model results are compared to system-level field data for a RAID group of 14 drives and show excellent correlation and greater accuracy than MTTDL.
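For context on the mean-time-to-data-loss method the paper's model is compared against, the classical memoryless RAID-5 estimate is MTTDL = MTTF^2 / (N(N-1) * MTTR). The sketch below evaluates it with illustrative disk parameters; the numbers are assumptions, not figures from the paper:

```python
# Classical MTTDL estimate for an N-disk single-parity group, assuming
# independent, exponentially distributed failures (the very assumption
# the paper criticizes as optimistic).
mttf_hours = 1_000_000     # per-disk mean time to failure (assumed)
mttr_hours = 24            # rebuild/restore time (assumed)
n_disks = 14               # group size matching the paper's field comparison

mttdl = mttf_hours**2 / (n_disks * (n_disks - 1) * mttr_hours)
print(f"MTTDL ~ {mttdl:,.0f} hours ~ {mttdl / 8760:,.0f} years")
```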
Efficient, distributed data placement strategies for storage area networks (extended abstract) In the last couple of years a dramatic growth of enterprise data storage capacity can be observed. As a result, new strategies have been sought that allow servers and storage being centralized to better manage the explosion of data and the overall cost of ownership. Nowadays, a common approach is to combine storage devices into a dedicated network that is connected to LANs and/or servers. Such networks are usually called storage area networks (SAN). A very important aspect for these networks is scalability. If a SAN undergoes changes (for instance, due to insertions or removals of disks), it may be necessary to replace data in order to allow an efficient use of the system. To keep the influence of data replacements on the performance of the SAN small, this should be done as efficiently as possible.In this paper, we investigate the problem of evenly distributing and efficiently locating data in dynamically changing SANs. We consider two scenarios: (1) all disks have the same capacity, and (2) the capacities of the disks are allowed to be arbitrary. For both scenarios, we present placement strategies capable of locating blocks efficiently and that are able to quickly adjust the data placement to insertions or removals of disks or data blocks. Furthermore, we study how the performance of our placement strategies changes if we allow to waste a certain amount of capacity of the disks.
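The paper's own placement strategies are not reproduced here; as a hedged stand-in that shares the stated goals (even distribution, fast block location, and few moves when disks are inserted or removed), the sketch below uses consistent hashing with virtual nodes:

```python
import bisect, hashlib

def h(key):
    """Deterministic hash of a string onto a large integer ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, disks, vnodes=100):
        # Each disk gets vnodes positions on the ring for smoother balance.
        self.ring = sorted((h(f"{d}#{i}"), d) for d in disks for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def locate(self, block):
        """First ring position clockwise from the block's hash owns the block."""
        i = bisect.bisect(self.keys, h(block)) % len(self.ring)
        return self.ring[i][1]

before = Ring(["disk0", "disk1", "disk2"])
after = Ring(["disk0", "disk1", "disk2", "disk3"])
moved = sum(before.locate(f"blk{b}") != after.locate(f"blk{b}") for b in range(10000))
print(f"{moved / 10000:.1%} of blocks move when one disk is added")  # roughly 1/4
```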
New Efficient MDS Array Codes for RAID Part I: Reed-Solomon-Like Codes for Tolerating Three Disk Failures This paper presents a class of binary Maximum Distance Separable (MDS) array codes for tolerating disk failures in Redundant Arrays of Inexpensive Disks (RAID) architectures, based on circular permutation matrices. The size of the information part is m x n, the size of the parity-check part is m x 3, and the minimum distance is 4, where n is the number of information disks, the number of parity-check disks is 3, and m+1 is a prime integer. In practical applications, m can be very large and n ranges from 20 to 50. The code rate is R = n/(n+3). These codes can be used for tolerating three disk failures. Encoding and decoding of these Reed-Solomon-like codes are very fast: encoding requires 3mn XOR operations and decoding requires 3mn + 9(m+1) XOR operations.
Database Recovery Using Redundant Disk Arrays
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Semantically-Smart Disk Systems We propose and evaluate the concept of a semantically-smart disk system (SDS). As opposed to a traditional "smart" disk, an SDS has detailed knowledge of how the file system above is using the disk system, including information about the on-disk data structures of the file system. An SDS exploits this knowledge to transparently improve performance or enhance functionality beneath a standard block read/write interface. To automatically acquire this knowledge, we introduce a tool (EOF) that can discover file-system structure for certain types of file systems, and then show how an SDS can exploit this knowledge on-line to understand file-system behavior. We quantify the space and time overheads that are common in an SDS, showing that they are not excessive. We then study the issues surrounding SDS construction by designing and implementing a number of prototypes as case studies; each case study exploits knowledge of some aspect of the file system to implement powerful functionality beneath the standard SCSI interface. Overall, we find that a surprising amount of functionality can be embedded within an SDS, hinting at a future where disk manufacturers can compete on enhanced functionality and not simply cost-per-byte and performance.
Striped Tape Arrays A growing number of applications require high capacity, high throughput tertiary storage systems. We are investigating how data striping ideas apply to arrays of magnetic tape drives. Data striping increases throughput and reduces response time for large accesses to a storage system. Striped magnetic tape systems are particularly appealing because many inexpensive magnetic tape drives have low bandwidth; striping may offer dramatic performance improvements for these systems. There are several important issues in designing striped tape systems: the choice of tape drives and robots, whether to stripe within or between robots, and the choice of the best scheme for distributing data on cartridges. One of the most troublesome problems in striped tape arrays is the synchronization of transfers across tape drives. Another issue is how improved devices will affect the desirability of striping in the future. We present the results of simulations comparing the performance of striped tape systems to non-striped systems.
Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you? Component failure in large-scale IT installations is becoming an ever larger problem as the number of components in a single cluster approaches a million. In this paper, we present and analyze field-gathered disk replacement data from a number of large production systems, including high-performance computing sites and internet services sites. About 100,000 disks are covered by this data, some for an entire lifetime of five years. The data include drives with SCSI and FC, as well as SATA interfaces. The mean time to failure (MTTF) of those drives, as specified in their datasheets, ranges from 1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%. We find that in the field, annual disk replacement rates typically exceed 1%, with 2-4% common and up to 13% observed on some systems. This suggests that field replacement is a fairly different process than one might predict based on datasheet MTTF. We also find evidence, based on records of disk replacements in the field, that failure rate is not constant with age, and that, rather than a significant infant mortality effect, we see a significant early onset of wearout degradation. That is, replacement rates in our data grew constantly with age, an effect often assumed not to set in until after a nominal lifetime of 5 years. Interestingly, we observe little difference in replacement rates between SCSI, FC and SATA drives, potentially an indication that disk-independent factors, such as operating conditions, affect replacement rates more than component specific factors. On the other hand, we see only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of media error rates, and this instance involved SATA disks. Time between replacement, a proxy for time between failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence.
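The 0.88% figure quoted above is just the datasheet MTTF converted to a nominal annual failure rate under a constant-failure-rate assumption, as this small check shows:

```python
# Nominal annual failure rate implied by a datasheet MTTF, assuming a
# constant failure rate and 8,760 power-on hours per year.
for mttf in (1_000_000, 1_500_000):
    afr = 8760 / mttf
    print(f"MTTF {mttf:>9,} h -> nominal AFR {afr:.2%}")
# MTTF 1,000,000 h -> nominal AFR 0.88%
# MTTF 1,500,000 h -> nominal AFR 0.58%
```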
Global reinforcement learning in neural networks. In this letter, we have found a more general formulation of the REward Increment = Nonnegative Factor x Offset Reinforcement x Characteristic Eligibility (REINFORCE) learning principle first suggested by Williams. The new formulation has enabled us to apply the principle to global reinforcement learning in networks with various sources of randomness, and to suggest several simple local rules for such networks. Numerical simulations have shown that for simple classification and reinforcement learning tasks, at least one family of the new learning rules gives results comparable to those provided by the famous Rules A(r-i) and A(r-p) for the Boltzmann machines.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers three advantages: it is easy to deploy, it does not affect the complexity of parity calculations, and it provides a five-year reliability of 99.999 percent even at catastrophic failure levels where the array would lose up to a quarter of its storage capacity in a year.
1.07625
0.1
0.1
0.1
0.066667
0.009167
0.000027
0.000003
0
0
0
0
0
0