Dataset schema (column, type, observed range per the dataset viewer):

Column        Type       Range
Query Text    string     9 to 8.71k chars
Ranking 1     string     14 to 5.31k chars
Ranking 2     string     11 to 5.31k chars
Ranking 3     string     11 to 8.42k chars
Ranking 4     string     17 to 8.71k chars
Ranking 5     string     14 to 4.95k chars
Ranking 6     string     14 to 8.42k chars
Ranking 7     string     17 to 8.42k chars
Ranking 8     string     10 to 5.31k chars
Ranking 9     string     9 to 8.42k chars
Ranking 10    string     9 to 8.42k chars
Ranking 11    string     10 to 4.11k chars
Ranking 12    string     14 to 8.33k chars
Ranking 13    string     17 to 3.82k chars
score_0       float64    1 to 1.25
score_1       float64    0 to 0.25
score_2       float64    0 to 0.25
score_3       float64    0 to 0.24
score_4       float64    0 to 0.24
score_5       float64    0 to 0.24
score_6       float64    0 to 0.21
score_7       float64    0 to 0.1
score_8       float64    0 to 0.02
score_9       float64    0 to 0
score_10      float64    0 to 0
score_11      float64    0 to 0
score_12      float64    0 to 0
score_13      float64    0 to 0
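Each row pairs a query document with 13 ranked candidate documents (Ranking 1 through Ranking 13) and 14 per-position relevance scores (score_0 through score_13); the exact mapping between the 14 score columns and the 13 ranking columns is not stated on this page. Below is a minimal sketch of how such a table could be loaded and summarized, assuming pandas and a Parquet export of the rows; the file name and the convention that a positive score marks a relevant position are assumptions for illustration, not part of the source.

```python
# Minimal sketch, not the dataset's official loader. Assumes the rows shown
# below were exported to Parquet; "rankings.parquet" is a hypothetical name.
import pandas as pd

df = pd.read_parquet("rankings.parquet")

ranking_cols = [f"Ranking {i}" for i in range(1, 14)]  # 13 candidate documents
score_cols = [f"score_{i}" for i in range(14)]         # 14 float64 scores

# Assumed convention (illustration only): scores are listed in rank order and
# a score > 0 marks a relevant position. Under that reading, the reciprocal
# rank of the first relevant position summarizes each row.
def reciprocal_rank(row) -> float:
    for rank, col in enumerate(score_cols, start=1):
        if row[col] > 0:
            return 1.0 / rank
    return 0.0

print("rows:", len(df))
print("MRR under the assumed convention:",
      df.apply(reciprocal_rank, axis=1).mean())
```

Under that assumed convention, a row whose scores read 1, 0, 0, ... has its first positive score in position 1 and contributes a reciprocal rank of 1.0.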
Sample rows

Row 1
Query Text: GTI at TASS 2016: Supervised Approach for Aspect Based Sentiment Analysis in Twitter.
Rankings 1-13 (in ranked order):
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose using an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Row 2
Query Text: AI-Based Classification to Facilitate Preservation of British Columbia Endangered Birds Species This study used artificial intelligence to identify endangered birds in British Columbia, Canada, to help preserve their habitats. Over many years of industrialization by humankind, it has become significantly more challenging for animals to live in a shrinking natural habitat. We used AI to assist in protecting these animals by quickly recognizing whether or not they are contained within an image of a forest. In doing so, users can take many photographs of the interior of a forest and attempt to see if an endangered species lives in the area.
Rankings 1-13 (in ranked order):
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose using an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Row 3
Query Text: A Metaobject Protocol for Controlling File Cache Management This paper presents the design of a metaobject protocol (MOP) for controlling file buffer caches in operating systems. The MOP exposes an abstraction of the file cache machinery that is an inherent part of every file system implementation, and thereby allows applications to control cache management decisions for the files that they use. Safety and protection are preserved by carefully designing the MOP so that the operating system retains control both over which applications may access...
Rankings 1-13 (in ranked order):
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic programs with classical negation
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose using an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores (score_0 to score_13): 1.2, 0.000219, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Row 4
Query Text: Soft sensor based on stacked auto-encoder deep neural network for air preheater rotor deformation prediction. Soft sensors have been widely used in industrial processes over the past two decades because they use easy-to-measure process variables to predict difficult-to-measure ones. Some success has been achieved by the dominant traditional methods of modeling soft sensors based on statistics, such as principal components analysis (PCA) and partial least squares (PLS), but such sensors usually become inaccurate and inefficient when processing strongly nonlinear data. In this paper, a new soft sensor modeling approach is proposed based on a deep learning network. First, stacked auto-encoders (SAEs) are employed to extract high-level feature representations of the input data. In the process of training each layer of a SAE, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS) is adopted to optimize the weight parameters. Then, a support vector regression (SVR) is added to predict the target value on the basis of the features obtained from the SAE. To improve the model performance, a Genetic Algorithm (GA) is used to obtain the optimal parameters of the SVR. To evaluate the proposed method, a soft sensor model for estimating the rotor deformation of air preheaters in a thermal power plant boiler is studied. The experimental results demonstrate that the soft sensor based on the SAE-SVR algorithm is more effective than the existing methods are.
Rankings 1-13 (in ranked order):
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose using an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Row 5
Query Text: Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks. In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods.
Rankings (in ranked order):
On-line deep learning method for action recognition. In this paper an unsupervised on-line deep learning algorithm for action recognition in video sequences is proposed. Deep learning models capable of deriving spatio-temporal data have been proposed in the past with remarkable results; yet, they are mostly restricted to building features from a short window length. The model presented here, on the other hand, considers the entire sample sequence and extracts the description in a frame-by-frame manner. Each computational node of the proposed paradigm forms clusters and computes point representatives. Subsequently, a first-order transition matrix stores and continuously updates the successive transitions among the clusters. Both the spatial and temporal information are concurrently treated by the Viterbi Algorithm, which maximizes a criterion based upon (a) the temporal transitions and (b) the similarity of the respective input sequence with the cluster representatives. The derived Viterbi path is the node's output, whereas the concatenation of nine vicinal such paths constitutes the input to the corresponding upper-level node. The engagement of ART and the Viterbi Algorithm in a deep learning architecture, here, for the first time, leads to a substantially different approach for action recognition. Compared with other deep learning methodologies, in most cases, it is shown to outperform them in terms of classification accuracy.
A Framework For Selecting Deep Learning Hyper-Parameters Recent research has found that deep learning architectures show significant improvements over traditional shallow algorithms when mining high dimensional datasets. When the choice of algorithm employed, hyper-parameter setting, number of hidden layers and nodes within a layer are combined, the identification of an optimal configuration can be a lengthy process. Our work provides a framework for building deep learning architectures via a stepwise approach, together with an evaluation methodology to quickly identify poorly performing architectural configurations. Using a dataset with high dimensionality, we illustrate how different architectures perform and how one algorithm configuration can provide input for fine-tuning more complex models.
Towards unsupervised physical activity recognition using smartphone accelerometers The development of smartphones equipped with accelerometers gives a promising way for researchers to accurately recognize an individual's physical activity in order to better understand the relationship between physical activity and health. However, a huge challenge for such sensor-based activity recognition task is the collection of annotated or labelled training data. In this work, we employ an unsupervised method for recognizing physical activities using smartphone accelerometers. Features are extracted from the raw acceleration data collected by smartphones, then an unsupervised classification method called MCODE is used for activity recognition. We evaluate the effectiveness of our method on three real-world datasets, i.e., a public dataset of daily living activities and two datasets of sports activities of race walking and basketball playing collected by ourselves, and we find our method outperforms other existing methods. The results show that our method is viable to recognize physical activities using smartphone accelerometers.
Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment Accurate face alignment is a vital prerequisite step for most face perception tasks such as face recognition, facial expression analysis and non-realistic face re-rendering. It can be formulated as the nonlinear inference of the facial landmarks from the detected face region. A deep network seems a good choice to model the nonlinearity, but it is nontrivial to apply it directly. In this paper, instead of a straightforward application of a deep network, we propose a Coarse-to-Fine Auto-encoder Networks (CFAN) approach, which cascades a few successive Stacked Auto-encoder Networks (SANs). Specifically, the first SAN predicts the landmarks quickly but accurately enough as a preliminary, by taking as input a low-resolution version of the detected face holistically. The following SANs then progressively refine the landmarks by taking as input the local features extracted around the current landmarks (output of the previous SAN) with higher and higher resolution. Extensive experiments conducted on three challenging datasets demonstrate that our CFAN outperforms the state-of-the-art methods and performs in real time (40+ fps excluding face detection on a desktop).
Two-layer contractive encodings for learning stable nonlinear features. Unsupervised learning of feature hierarchies is often a good strategy to initialize deep architectures for supervised learning. Most existing deep learning methods build these feature hierarchies layer by layer in a greedy fashion using either auto-encoders or restricted Boltzmann machines. Both yield encoders which compute linear projections of input followed by a smooth thresholding function. In this work, we demonstrate that these encoders fail to find stable features when the required computation is in the exclusive-or class. To overcome this limitation, we propose a two-layer encoder which is less restricted in the type of features it can learn. The proposed encoder is regularized by an extension of previous work on contractive regularization. This proposed two-layer contractive encoder potentially poses a more difficult optimization problem, and we further propose to linearly transform hidden neurons of the encoder to make learning easier. We demonstrate the advantages of the two-layer encoders qualitatively on artificially constructed datasets as well as commonly used benchmark datasets. We also conduct experiments on a semi-supervised learning task and show the benefits of the proposed two-layer encoders trained with the linear transformation of perceptrons.
Deep learning for healthcare decision making with EMRs Computer aid technology is widely applied in decision-making and outcome assessment of healthcare delivery, in which modeling knowledge and expert experience is technically important. However, the conventional rule-based models are incapable of capturing the underlying knowledge because they are incapable of simulating the complexity of human brains and highly rely on feature representation of problem domains. Thus we attempt to apply a deep model to overcome this weakness. The deep model can simulate the thinking procedure of human and combine feature representation and learning in a unified model. A modified version of convolutional deep belief networks is used as an effective training method for large-scale data sets. Then it is tested by two instances: a dataset on hypertension retrieved from a HIS system, and a dataset on Chinese medical diagnosis and treatment prescription from a manual converted electronic medical record (EMR) database. The experimental results indicate that the proposed deep model is able to reveal previously unknown concepts and performs much better than the conventional shallow models.
Learning Features from Music Audio with Deep Belief Networks.
Why Does Unsupervised Pre-training Help Deep Learning? Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this question is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
RSN1-tt(NP) Distinguishes Robust Many-One and Turing Completeness Do complexity classes have many-one complete sets if and only if they have Turing-complete sets? We prove that there is a relativized world in which a relatively natural complexity class, namely a downward closure of NP, RSN1-tt(NP), has Turing-complete sets but has no many-one complete sets. In fact, we show that in the same relativized world this class has 2-truth-table complete sets but lacks 1-truth-table complete sets. As part of the groundwork for our result, we prove that RSN1-tt(NP) has many equivalent forms having to do with ordered and parallel access to NP and NP ∩ coNP.
On Active Deductive Databases: The Statelog Approach After briefly reviewing the basic notions and terminology of active rules and relating them to production rules and deductive rules, respectively, we survey a number of formal approaches to active rules. Subsequently, we present our own state-oriented logical approach to active rules which combines the declarative semantics of deductive rules with the possibility to define updates in the style of production rules and active rules. The resulting language Statelog is surprisingly simple, yet...
On the relations between stable and well-founded semantics of logic programs We study the relations between stable and well-founded semantics of logic programs. (1) We show that stable semantics can be defined in the same way as well-founded semantics, based on the basic notion of unfounded sets; hence, stable semantics can be considered as "two-valued well-founded semantics". (2) An axiomatic characterization of stable and well-founded semantics of logic programs is given by a new completion theory, called strong completion. Similar to Clark's completion, the strong completion can be interpreted in either two-valued or three-valued logic: two-valued strong completion specifies the stable semantics, while three-valued strong completion specifies the well-founded semantics. (3) We study the equivalence between stable and well-founded semantics. First, we prove the equivalence between the two semantics for strict programs. Then we introduce the bottom-stratified and top-strict condition, generalizing both stratifiability and strictness, and show that the new condition is sufficient for the equivalence between stable and well-founded semantics. Further, we show that the call-consistency condition is sufficient for the existence of at least one stable model.
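A standard example of the gap these conditions close: in the program {p ← not q; q ← not p; r ← p; r ← q}, the atom r is true in both stable models, {p, r} and {q, r}, yet the well-founded semantics leaves p, q and r undefined. The program is not strict, since r depends on p (and on q) both positively and negatively, so the sufficient conditions above correctly exclude it.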
Integrating representation learning and skill learning in a human-like intelligent agent. Building an intelligent agent that simulates human learning of math and science could potentially benefit both cognitive science, by contributing to the understanding of human learning, and artificial intelligence, by advancing the goal of creating human-level intelligence. However, constructing such a learning agent currently requires manual encoding of prior domain knowledge; in addition to being a poor model of human acquisition of prior knowledge, manual knowledge-encoding is both time-consuming and error-prone. Previous research has shown that one of the key factors that differentiates experts and novices is their different representations of knowledge. Experts view the world in terms of deep functional features, while novices view it in terms of shallow perceptual features. Moreover, since the performance of learning algorithms is sensitive to representation, the deep features are also important in achieving effective machine learning. In this paper, we present an efficient algorithm that acquires representation knowledge in the form of “deep features”, and demonstrate its effectiveness in the domain of algebra as well as synthetic domains. We integrate this algorithm into a machine-learning agent, SimStudent, which learns procedural knowledge by observing a tutor solve sample problems, and by getting feedback while actively solving problems on its own. We show that learning “deep features” reduces the requirements for knowledge engineering. Moreover, we propose an approach that automatically discovers student models using the extended SimStudent. By fitting the discovered model to real student learning curve data, we show that it is a better student model than human-generated models, and demonstrate how the discovered model may be used to improve a tutoring system's instructional strategy.
1.05125
0.05
0.05
0.025
0.016667
0.01
0.001
0.000123
0.000018
0
0
0
0
0
Pseudo random sequence over finite field using Möbius Function Pseudo random sequences play an important role in cryptography and network security systems. This paper proposes a new approach for the generation of pseudo random sequences over an odd characteristic field. The sequence is generated by applying a primitive polynomial over an odd characteristic field, the trace function, and the Möbius function. Some important properties of the newly generated sequence, such as period, autocorrelation, and cross-correlation, are then studied in this work. The properties of the generated sequence are evaluated for various bit lengths of the odd characteristic. Finally, the experimental results are compared with existing works, which shows the superiority of the proposed sequence over existing ones.
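The construction can be made concrete with elementary field arithmetic. Below is a simplified, self-contained sketch in the spirit of the paper: elements of GF(p^m) are coefficient lists modulo a monic polynomial f, the trace maps each power of a root of f down to GF(p), and the Möbius function (with μ(0) taken as 0) turns those trace values into sequence symbols. The specific field GF(5^2), the polynomial, and the exact way μ is applied are assumptions of this demo, not the paper's exact parameters.

    def mobius(n):
        # Classical Moebius function on non-negative integers; mu(0) := 0 here.
        if n <= 0:
            return 0
        result, d, m = 1, 2, n
        while d * d <= m:
            if m % d == 0:
                m //= d
                if m % d == 0:        # squared prime factor
                    return 0
                result = -result
            d += 1
        return -result if m > 1 else result

    def gf_mul(a, b, f, p):
        # Multiply a, b in GF(p)[x]/(f); coefficient lists are lowest degree
        # first, f is monic of degree m = len(f) - 1.
        m = len(f) - 1
        prod = [0] * (2 * m - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % p
        for k in range(len(prod) - 1, m - 1, -1):   # reduce x^k (k >= m) mod f
            c = prod[k]
            for j in range(m + 1):
                prod[k - m + j] = (prod[k - m + j] - c * f[j]) % p
        return prod[:m]

    def gf_pow(a, e, f, p):
        result, base = [1] + [0] * (len(f) - 2), a
        while e:
            if e & 1:
                result = gf_mul(result, base, f, p)
            base, e = gf_mul(base, base, f, p), e >> 1
        return result

    def trace(a, f, p):
        # Tr(a) = a + a^p + ... + a^(p^(m-1)) lies in the prime field GF(p).
        m = len(f) - 1
        t = [0] * m
        for j in range(m):
            t = [(x + y) % p for x, y in zip(t, gf_pow(a, p ** j, f, p))]
        assert all(c == 0 for c in t[1:])    # sanity: trace is in GF(p)
        return t[0]

    # Demo over GF(5^2) with f(x) = x^2 + x + 2, which is primitive over GF(5):
    # s_i = mu(Tr(alpha^i)) for a root alpha of f, over one full period.
    p, f = 5, [2, 1, 1]
    alpha, a, seq = [0, 1], [1, 0], []
    for _ in range(p ** 2 - 1):
        seq.append(mobius(trace(a, f, p)))
        a = gf_mul(a, alpha, f, p)
    print(seq)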
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
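The core computation reduces to an eigendecomposition of the centered kernel (Gram) matrix rather than of a covariance matrix. A minimal numpy sketch with an RBF kernel (the kernel choice and normalization details here are illustrative):

    import numpy as np

    def kernel_pca(X, k, gamma=1.0):
        # Build the RBF Gram matrix K, double-center it, eigendecompose,
        # and project the training points onto the top-k components.
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                            # centering in feature space
        w, V = np.linalg.eigh(Kc)
        idx = np.argsort(w)[::-1][:k]             # largest eigenvalues first
        alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
        return Kc @ alphas                        # projections of training data

    X = np.random.default_rng(2).standard_normal((100, 5))
    print(kernel_pca(X, k=2).shape)               # -> (100, 2)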
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
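The "square root" in the name refers to factorizing the measurement Jacobian A of the linearized problem as A = QR and back-substituting, rather than forming and inverting the information matrix A^T A. A minimal dense sketch (real SAM implementations exploit sparsity and variable ordering, which this toy version does not):

    import numpy as np

    def solve_sam_step(A, b):
        # Solve min ||A x - b||^2 via QR: R is the upper-triangular
        # "square root" of the information matrix A^T A.
        Q, R = np.linalg.qr(A)
        return np.linalg.solve(R, Q.T @ b)   # a triangular solver (e.g.
        # scipy.linalg.solve_triangular) would exploit R's structure

    rng = np.random.default_rng(3)
    A = rng.standard_normal((200, 20))        # stacked measurement Jacobian
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x = solve_sam_step(A, b)
    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True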
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Discriminative Feature Learning for Action Recognition Using a Stacked Denoising Autoencoder
Building feature space of extreme learning machine with sparse denoising stacked-autoencoder. The random-hidden-node extreme learning machine (ELM) is a much more generalized cluster of single-hidden-layer feed-forward neural networks (SLFNs) which has three parts: random projection, non-linear transformation, and a ridge regression (RR) model. Networks with deep architectures have demonstrated state-of-the-art performance in a variety of settings, especially in computer vision tasks. Deep learning algorithms such as the stacked autoencoder (SAE) and deep belief network (DBN) are built on learning several levels of representation of the input. Beyond simply learning features by stacking autoencoders (AE), there is a need to increase robustness to noise and reinforce the sparsity of weights to make it easier to discover interesting and prominent features. The sparse AE and denoising AE were hence developed for this purpose. This paper proposes an approach, SSDAE-RR (stacked sparse denoising autoencoder – ridge regression), that effectively integrates the advantages of SAE, sparse AE, denoising AE, and the RR implementation in the ELM algorithm. We conducted an experimental study on real-world classification (binary and multiclass) and regression problems of different scales among several relevant approaches: SSDAE-RR, ELM, DBN, neural network (NN), and SAE. The performance analysis shows that SSDAE-RR tends to achieve a better generalization ability on relatively large datasets (large sample size and high dimension) that were not pre-processed for feature abstraction. For 16 out of 18 tested datasets, the performance of SSDAE-RR is more stable than that of the other tested approaches. We also note that the sparsity regularization and denoising mechanism seem to be mandatory for constructing interpretable feature representations. The fact that the SSDAE-RR approach often has a training time comparable to ELM makes it useful in some real applications.
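For context, the RR readout that SSDAE-RR shares with ELM can be sketched in a few lines: random hidden weights, a non-linear transformation, and a closed-form ridge solve. In SSDAE-RR the hidden representation would come from the trained stacked sparse denoising autoencoder rather than from random projections; this sketch shows only the plain ELM baseline.

    import numpy as np

    def elm_fit_predict(Xtr, ytr, Xte, n_hidden=200, lam=1e-2, seed=0):
        # Random projection + tanh, then a ridge regression readout solved
        # in closed form: no backpropagation is involved.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((Xtr.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        Htr, Hte = np.tanh(Xtr @ W + b), np.tanh(Xte @ W + b)
        beta = np.linalg.solve(Htr.T @ Htr + lam * np.eye(n_hidden),
                               Htr.T @ ytr)
        return Hte @ beta

    rng = np.random.default_rng(4)
    X = rng.standard_normal((300, 10))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)
    pred = elm_fit_predict(X[:200], y[:200], X[200:])
    print(np.mean((pred - y[200:]) ** 2))   # held-out mean squared error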
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
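A minimal tied-weight denoising autoencoder can be written with plain numpy: corrupt the input with masking noise, encode the corrupted version, and reconstruct the clean version. This sketch uses squared error for brevity (cross-entropy is the usual choice for binary inputs) and performs one gradient step per call:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def dae_step(X, W, b, c, lr=0.1, corrupt=0.3, rng=None):
        rng = rng or np.random.default_rng(0)
        Xn = X * (rng.random(X.shape) > corrupt)   # masking noise
        H = sigmoid(Xn @ W + b)                    # encode corrupted input
        R = H @ W.T + c                            # decode (tied weights)
        G = (R - X) / len(X)                       # grad of MSE w.r.t. R
        dS = (G @ W) * H * (1 - H)                 # grad at hidden pre-activation
        W -= lr * (Xn.T @ dS + G.T @ H)            # encoder + decoder parts
        b -= lr * dS.sum(axis=0)
        c -= lr * G.sum(axis=0)
        return float(np.mean((R - X) ** 2))

    rng = np.random.default_rng(5)
    X = (rng.random((256, 64)) > 0.5).astype(float)
    W = 0.1 * rng.standard_normal((64, 32))
    b, c = np.zeros(32), np.zeros(64)
    for _ in range(50):
        err = dae_step(X, W, b, c, rng=rng)
    print(err)   # reconstruction error of clean inputs from corrupted ones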
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Indexing By Latent Semantic Analysis
Disk Shadowing Disk shadowing is a technique for maintaining a set of two or more identical disk images on separate disk devices. Its primary purpose is to enhance the reliability and availability of secondary storage by providing multiple paths to redundant data. However, shadowing can also boost I/O performance. In this paper, we contend that intelligent device scheduling of shadowed discs increases the I/O rate by allowing parallel reads and by substantially reducing the average seek time for random reads. In particular, we develop an analytic model which shows that the seek time for a random read in a shadow set is a monotonically decreasing function of the number of disks.
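The seek-time claim is easy to check with a short Monte Carlo experiment: model each head and each request as a uniform position on [0, 1] and serve the read with the closest head. A sketch under those simplified modeling assumptions:

    import numpy as np

    def expected_seek(n_disks, n_trials=200_000, seed=0):
        # Each disk's head sits at an independent uniform position; a random
        # read is serviced by whichever head is closest to the target track.
        rng = np.random.default_rng(seed)
        heads = rng.random((n_trials, n_disks))
        target = rng.random((n_trials, 1))
        return np.abs(heads - target).min(axis=1).mean()

    for n in (1, 2, 4, 8):
        print(n, round(expected_seek(n), 4))
    # The estimate is about 1/3 for a single disk and shrinks monotonically
    # as disks are added, matching the paper's analytic claim.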
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P ∪ {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also propose a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discuss the advantages, the limitations and the future work of our study. Both practical and theoretical contributions are also discussed in the paper.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.1
0.025
0.00042
0.000098
0
0
0
0
0
0
0
0
0
0
Energy Efficient Buffer Cache Replacement for Data Servers Power consumption is an increasingly pressing concern for data servers, as it directly affects running costs and system reliability. Prior studies have shown that most memory space on data servers is used for buffer caching, and thus cache replacement becomes critical. Two conflicting factors of buffer caching impact memory energy efficiency: (1) a higher hit rate reduces memory traffic and thus saves energy; (2) temporally concentrating memory accesses on a smaller set of memory chips increases the chances of "free riding" through DMA overlapping and also gives more memory chips opportunities to power down. This paper investigates the tradeoff between these two interacting, sometimes conflicting factors and proposes three energy-aware buffer cache replacement algorithms: on a cache miss for a new block b in a file f, evict a victim block from (1) the most recently accessed memory chip, (2) the memory chip that was accessed most recently by file f, or (3) the memory chip that was accessed most recently by file f and whose last accessed block belongs to the same hot or cold category as block b. Simulation results based on three real-world I/O traces, including TPC-R, MSN-BEFS and Exchange, show that our algorithms can save up to 24.9% energy with marginal degradation in hit rates. Our algorithms show some degradation in response time in certain experiments. We also propose an off-line, energy sub-optimal replacement algorithm that serves as a theoretical reference.
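A sketch of policy (1) illustrates the idea: track which chip was touched last and, on a miss, take the victim from that chip, so accesses stay concentrated and idle chips can power down. The free-slot-first placement and per-chip LRU bookkeeping below are simplifying assumptions of this sketch, not the paper's exact mechanism.

    import random
    from collections import OrderedDict

    class ChipAwareCache:
        def __init__(self, slots_per_chip, n_chips):
            self.chips = [OrderedDict() for _ in range(n_chips)]  # per-chip LRU
            self.cap = slots_per_chip
            self.last_chip = 0
            self.hits = self.misses = 0

        def access(self, block):
            for i, chip in enumerate(self.chips):
                if block in chip:
                    chip.move_to_end(block)      # refresh recency on a hit
                    self.last_chip = i
                    self.hits += 1
                    return
            self.misses += 1
            # Prefer a chip with free slots; otherwise evict the LRU block of
            # the most recently accessed chip, keeping traffic concentrated
            # so idle chips get longer power-down windows.
            target = next((i for i, c in enumerate(self.chips)
                           if len(c) < self.cap), self.last_chip)
            chip = self.chips[target]
            if len(chip) >= self.cap:
                chip.popitem(last=False)         # evict chip-local LRU victim
            chip[block] = None
            self.last_chip = target

    random.seed(6)
    cache = ChipAwareCache(slots_per_chip=64, n_chips=4)
    for _ in range(10_000):
        cache.access(random.randint(0, 499))
    print(cache.hits, cache.misses)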
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Think Global, Act Local: A Buffer Cache Design for Global Ordering and Parallel Processing in the WAFL File System Given the enormous disparity in access speeds between main memory and storage media, modern storage servers must leverage highly effective buffer cache policies to meet demanding performance requirements. At the same time, these page replacement policies need to scale efficiently with ever-increasing core counts and memory sizes, which necessitate parallel buffer cache management. However, these requirements of effectiveness and scalability are at odds, because centralized processing does not scale with more processors and parallel policies are a challenge to implement with maximum effectiveness. We have overcome this difficulty in the NetApp Data ONTAP WAFL file system by using a sophisticated technique that allows global buffer prioritization while providing parallel management operations. In addition, we have extended the buffer cache to provide a soft isolation of different workloads' buffer cache usage, which is akin to buffer cache quality of service (QoS). This paper presents the design and implementation of these significant extensions in the buffer cache of a high-performance commercial file system.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
EpiGPU: exhaustive pairwise epistasis scans parallelized on consumer level graphics cards. Motivation: Hundreds of genome-wide association studies have been performed over the last decade, but as single nucleotide polymorphism (SNP) chip density has increased so has the computational burden to search for epistasis [for n SNPs the computational time resource is O(n(n-1)/2)]. While the theoretical contribution of epistasis toward phenotypes of medical and economic importance is widely discussed, empirical evidence is conspicuously absent because its analysis is often computationally prohibitive. To facilitate resolution in this field, tools must be made available that can render the search for epistasis universally viable in terms of hardware availability, cost and computational time. Results: By partitioning the 2D search grid across the multicore architecture of a modern consumer graphics processing unit (GPU), we report a 92x increase in the speed of an exhaustive pairwise epistasis scan for a quantitative phenotype, and we expect the speed to increase as graphics cards continue to improve. To achieve a comparable computational improvement without a graphics card would require a large compute cluster, an option that is often financially non-viable. The implementation presented uses OpenCL, an open-source library designed to run on any commercially available GPU and on any operating system.
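The O(n(n-1)/2) pair grid that EpiGPU partitions across GPU cores can be emulated serially in a few lines; here each pair is scored by the squared correlation of the interaction term with the quantitative phenotype, a deliberately simplified stand-in for the regression-based test in the paper:

    import numpy as np

    def pairwise_epistasis_scan(G, y):
        # Exhaustive scan over all SNP pairs (i, j), i < j, scoring the
        # interaction term g_i * g_j against the phenotype; on a GPU this
        # triangular pair grid is what gets partitioned across cores.
        yc = (y - y.mean()) / y.std()
        n = G.shape[1]
        best, best_pair = -1.0, None
        for i in range(n - 1):
            inter = G[:, i:i + 1] * G[:, i + 1:]
            inter = inter - inter.mean(axis=0)
            sd = inter.std(axis=0)
            sd[sd == 0] = np.inf                  # skip constant interactions
            r2 = (inter.T @ yc / len(y)) ** 2 / (sd ** 2)
            j = int(np.argmax(r2))
            if r2[j] > best:
                best, best_pair = float(r2[j]), (i, i + 1 + j)
        return best_pair, best

    rng = np.random.default_rng(7)
    G = rng.integers(0, 3, size=(500, 40)).astype(float)   # genotypes 0/1/2
    y = G[:, 3] * G[:, 17] + rng.standard_normal(500)      # planted interaction
    print(pairwise_epistasis_scan(G, y))   # should recover the pair (3, 17)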
An efficient algorithm to perform multiple testing in epistasis screening. Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require memory proportional to the squared number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn's disease. In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) Processor 2352 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn's disease (CD) data. Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
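The memory argument above can be sketched in a few lines: each permutation contributes only its maximum statistic over all tests, so the stored state is one number per permutation no matter how many SNP-SNP effects are screened. The statistic below is a throwaway stand-in for the MB-MDR test; all names are illustrative.

import numpy as np

def maxt_adjusted_pvalues(tests, y, statistic, n_perm=999, seed=1):
    """tests: per-test data vectors; statistic(y, v) -> test statistic."""
    rng = np.random.default_rng(seed)
    observed = np.array([statistic(y, v) for v in tests])
    perm_max = np.empty(n_perm)                 # one float per permutation
    for b in range(n_perm):
        y_perm = rng.permutation(y)             # permute the trait
        perm_max[b] = max(statistic(y_perm, v) for v in tests)
    # Single-step maxT adjustment:
    return [(1 + np.sum(perm_max >= t)) / (n_perm + 1) for t in observed]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 6)), rng.normal(size=50)
pairs = [X[:, i] * X[:, j] for i in range(6) for j in range(i + 1, 6)]
stat = lambda y, v: abs(np.corrcoef(y, v)[0, 1])
print(min(maxt_adjusted_pvalues(pairs, y, stat, n_perm=199)))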
High-speed exhaustive 3-locus interaction epistasis analysis on FPGAs Epistasis, the interaction between genes, has become a major topic in molecular and quantitative genetics. It is believed that these interactions play a significant role in genetic variations causing complex diseases. Several algorithms have been employed to detect pairwise interactions in genome-wide association studies (GWAS) but revealing higher order interactions remains a computationally challenging task. State of the art tools are not able to perform exhaustive search for all three-locus interactions in reasonable time even for relatively small input datasets. In this paper we present how a hardware-assisted design can solve this problem and provide fast, efficient and exhaustive third-order epistasis analysis with up-to-date FPGA technology.
Improvement of BLASTp on the FPGA-based high-performance computer RIVYERA NCBI BLASTp has played the major role in protein database searches for years. However, with today's growth of sequence database sizes, it becomes increasingly inefficient on standard PC architectures. One solution to address this problem was already presented in our previous implementation, published in [16], taking advantage of the massive parallelization provided by the FPGA-based high-performance computer RIVYERA [3]. The analysis of bottlenecks in our BLASTp pipeline showed the urgent need to speed up the two-hit finder component, as well as the postprocessing on the PC. After a complete redesign of the two-hit finder and the insertion of a new "gapped extension" filter, we achieve a speedup of up to 376, compared to one thread of a fully utilized 2x Intel Xeon E5520 PC system at 2.26 GHz running original NCBI BLASTp v. 2.2.25+. This is about two times the performance of our previous implementation.
FPGA-based Acceleration of Detecting Statistical Epistasis in GWAS. Genotype-by-genotype interactions (epistasis) are believed to be a significant source of unexplained genetic variation causing complex chronic diseases but have been ignored in genome-wide association studies (GWAS) due to the computational burden of analysis. In this work we show how to benefit from FPGA technology for highly parallel creation of contingency tables in a systolic chain with a subsequent statistical test. We present the implementation for the FPGA-based hardware platform RIVYERA S6-LX150 containing 128 Xilinx Spartan6-LX150 FPGAs. For performance evaluation we compare against the method iLOCi [9]. iLOCi claims to outperform other available tools in terms of accuracy. However, analysis of a dataset from the Wellcome Trust Case Control Consortium (WTCCC) with about 500,000 SNPs and 5,000 samples still takes about 19 hours on a MacPro workstation with two Intel Xeon quad-core CPUs, while our FPGA-based implementation requires only 4 minutes.
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
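A minimal scikit-learn illustration of these ingredients (many trees, random feature selection at each split, and the internal out-of-bag error estimate); the dataset and hyperparameters are arbitrary choices for demonstration.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                oob_score=True, random_state=0).fit(X, y)
print("out-of-bag error:", 1.0 - forest.oob_score_)    # internal error estimate
print("most important feature:", forest.feature_importances_.argmax())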
Bioinformatics Research and Applications: 8th International Symposium, ISBRA 2012, Dallas, TX, USA, May 21-23, 2012. Proceedings
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
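The same staged pipeline (scale-space keypoint detection, descriptor extraction, nearest-neighbor matching with a ratio test) can be tried with OpenCV; the image paths are placeholders, and a build of OpenCV recent enough to ship SIFT_create (>= 4.4) is assumed.

import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
# Keep only distinctive matches (Lowe-style ratio test).
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "candidate matches")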
ConGolog, a concurrent programming language based on the situation calculus As an alternative to planning, an approach to high-level agent control based on concurrent program execution is considered. A formal definition in the situation calculus of such a programming language is presented and illustrated with some examples. The language includes facilities for prioritizing the execution of concurrent processes, interrupting the execution when certain conditions become true, and dealing with exogenous actions. The language differs from other procedural formalisms for...
An Introduction to MCMC for Machine Learning The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.
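As a concrete instance of the basic building block, here is a self-contained random-walk Metropolis sampler; the standard-normal target and the step size are arbitrary illustrative choices.

import numpy as np

def metropolis(log_target, x0, n_steps, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

draws = metropolis(lambda z: -0.5 * z * z, x0=0.0, n_steps=10_000)
print(draws.mean(), draws.std())   # approximately 0 and 1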
A continuum of disk scheduling algorithms A continuum of disk scheduling algorithms, V(R), having endpoints V(0) = SSTF and V(1) = SCAN, is defined. V(R) maintains a current SCAN direction (in or out) and services next the request with the smallest effective distance. The effective distance of a request that lies in the current direction is its physical distance (in cylinders) from the read/write head. The effective distance of a request in the opposite direction is its physical distance plus R × (total number of cylinders on the disk). By use of simulation methods, it is shown that this definitional continuum also provides a continuum in performance, both with respect to the mean and with respect to the standard deviation of request waiting time. For objective functions that are linear combinations of the two measures, μ_W + k·σ_W, intermediate points of the continuum are seen to provide performance uniformly superior to both SSTF and SCAN. A method of implementing V(R) and the results of its experimental use in a real system are presented.
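The effective-distance rule is the whole algorithm, so it transcribes almost directly into code; the sketch below selects the next request under V(R), with R = 0 reducing to SSTF and R = 1 to SCAN.

def next_request(head, direction, pending, R, n_cylinders):
    """pending: cylinder numbers; direction: +1 (outward) or -1 (inward)."""
    def effective_distance(cyl):
        d = abs(cyl - head)                       # physical seek distance
        ahead = (cyl - head) * direction >= 0     # lies in current direction?
        return d if ahead else d + R * n_cylinders
    return min(pending, key=effective_distance)

# Head at cylinder 50 moving outward on a 200-cylinder disk, R = 0.2:
print(next_request(50, +1, [45, 70, 180], R=0.2, n_cylinders=200))   # -> 70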
Computational Politics: Electoral Systems This paper discusses three computation-related results in the study of electoral systems: 1. Determining the winner in Lewis Carroll's 1876 electoral system is complete for parallel access to NP [22]. 2. For any electoral system that is neutral, consistent, and Condorcet, determining the winner is complete for parallel access to NP [21]. 3. For each census in US history, a simulated annealing algorithm yields provably fairer (in a mathematically rigorous sense) congressional apportionments than any of the classic algorithms, even the algorithm currently used in the United States [24].
Adaptive placement of method executions within a customizable distributed object-based runtime system: design, implementation and performance This paper presents the design and implementation of a mechanism aimed at enhancing the performance of distributed object-based applications. This goal is achieved by means of a new algorithm implementing placement of method executions that adapts to processors' load and to objects' characteristics, the latter allowing the cost of methods' remote execution to be approximated. The behavior of the proposed placement algorithm is examined by providing performance measures obtained from its integration within a customizable distributed object-based runtime system. In particular, the cost of method executions using our algorithm is compared with the cost resulting from the standard placement technique that consists of executing any method on the storing node of its embedding object.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
score_0–score_13: 1.052148, 0.051241, 0.05, 0.026241, 0.019228, 0.000077, 0.000002, 0, 0, 0, 0, 0, 0, 0
An expandable parallel file system using NFS servers This paper describes a new parallel file system, called Expand (Expandable Parallel File System), that is based on NFS servers. Expand allows the transparent use of multiple NFS servers as a single file system. The different NFS servers are combined to create a distributed partition where files are declustered. Expand requires no changes to the NFS server and uses RPC operations to provide parallel access to the same file. Expand is also independent of the clients, because all operations are implemented using RPC and NFS protocol. Using this system, we can join heterogeneous servers (Linux, Solaris, Windows 2000, etc.) to provide a parallel and distributed partition. The paper describes the design of Expand and the evaluation of a prototype of Expand. This evaluation has been made in Linux clusters and compares Expand, NFS and PVFS.
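As a toy illustration of declustering (not Expand's actual layout policy), the sketch below stripes a file's blocks round-robin across a list of NFS servers; the server names and stripe unit are invented for the example.

def block_location(file_id, block_no, servers, stripe_unit=4096):
    """Map (file, block) to (server, byte offset) by round-robin striping."""
    server = servers[(file_id + block_no) % len(servers)]
    offset = (block_no // len(servers)) * stripe_unit
    return server, offset

servers = ["nfs0:/export", "nfs1:/export", "nfs2:/export"]
for blk in range(5):
    print(blk, block_location(file_id=7, block_no=blk, servers=servers))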
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
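A compact numpy sketch of the procedure: build a kernel matrix, double-center it (the feature-space analogue of mean subtraction), and read components off its leading eigenvectors. The RBF kernel, its bandwidth, and the toy data are illustrative choices.

import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]  # leading eigenpairs
    alphas = vecs[:, idx] / np.sqrt(vals[idx])   # normalized coefficients
    return Kc @ alphas                           # projections of training data

X = np.random.default_rng(0).normal(size=(50, 16))
print(kernel_pca(X).shape)                       # (50, 2)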
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
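The linear-algebra core can be shown in a few lines: factor the stacked measurement Jacobian with QR and back-substitute, instead of forming the information matrix explicitly. Dense numpy stands in here for the sparse, column-reordered factorization a real SLAM system would use; the data are random placeholders.

import numpy as np

def solve_least_squares_qr(A, b):
    Q, R = np.linalg.qr(A)                 # A = QR; R is the square-root factor
    return np.linalg.solve(R, Q.T @ b)     # back-substitute R x = Q^T b

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 30))             # stacked, linearized Jacobians
b = rng.normal(size=200)                   # stacked residuals
x = solve_least_squares_qr(A, b)
print(np.allclose(A.T @ A @ x, A.T @ b))   # satisfies the normal equations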
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Blind Image Quality Assessment via Deep Learning. This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, ...
Non-Local Auto-Encoder With Collaborative Stabilization for Image Restoration. Deep neural networks have been applied to image restoration to achieve top-level performance. From a neuroscience perspective, the layerwise abstraction of knowledge in a deep neural network can, to some extent, reveal the mechanisms of how visual cues are processed in the human brain. A pivotal property of the human brain is that similar visual cues can stimulate the same neuron to induce similar neurological signals. However, conventional neural networks do not consider this property, and the resulting models are, as a result, unstable regarding their internal propagation. In this paper, we develop the (stacked) non-local auto-encoder, which exploits self-similar information in natural images for stability. We propose that similar inputs should induce similar network propagation. This is achieved by constraining the difference between the hidden representations of non-local similar image blocks during training. By applying the proposed model to image restoration, we then develop a collaborative stabilization step to further rectify forward propagation. To obtain a reliable deep model, we employ several strategies to simplify training and improve testing. Extensive image restoration experiments, including image denoising and super-resolution, demonstrate the effectiveness of the proposed method.
Coupled Deep Autoencoder for Single Image Super-Resolution. Sparse coding has been widely applied to learning-based single image super-resolution (SR) and has obtained promising performance by jointly learning effective representations for low-resolution (LR) and high-resolution (HR) image patch pairs. However, the resulting HR images often suffer from ringing, jaggy, and blurring artifacts due to the strong yet ad hoc assumptions that the LR image patch r...
Deep and Shallow Architecture of Multilayer Neural Networks This paper focuses on the deep and shallow architecture of multilayer neural networks (MNNs). Demonstrating whether or not an MNN can be replaced by another MNN with fewer layers is equivalent to studying the topological conjugacy of its hidden layers. This paper provides a systematic methodology to indicate when two hidden spaces are topologically conjugate. Furthermore, some criteria are presented for some specific cases.
On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts on their ability in implementing high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the used activation function. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.
A framework for mining signatures from event sequences and its applications in healthcare data. This paper proposes a novel temporal knowledge representation and learning framework to perform large-scale temporal signature mining of longitudinal heterogeneous event data. The framework enables the representation, extraction, and mining of high-order latent event structure and relationships within single and multiple event sequences. The proposed knowledge representation maps the heterogeneous event sequences to a geometric image by encoding events as a structured spatial-temporal shape process. We present a doubly constrained convolutional sparse coding framework that learns interpretable and shift-invariant latent temporal event signatures. We show how to cope with the sparsity in the data as well as in the latent factor model by inducing a double sparsity constraint on the β-divergence to learn an overcomplete sparse latent factor model. A novel stochastic optimization scheme performs large-scale incremental learning of group-specific temporal event signatures. We validate the framework on synthetic data and on an electronic health record dataset.
A fast learning algorithm for deep belief nets. We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
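One greedy layer of that scheme amounts to training an RBM with contrastive divergence; below is a hedged numpy sketch of a single CD-1 update on a mini-batch of binary visible vectors, not the paper's exact schedule or hyperparameters.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(V, W, b_vis, b_hid, lr=0.1):
    ph = sigmoid(V @ W + b_hid)                     # P(h = 1 | v)
    h = (rng.uniform(size=ph.shape) < ph) * 1.0     # sample hidden units
    pv = sigmoid(h @ W.T + b_vis)                   # one-step reconstruction
    ph_rec = sigmoid(pv @ W + b_hid)
    W += lr * (V.T @ ph - pv.T @ ph_rec) / len(V)   # CD-1 gradient estimate
    b_vis += lr * (V - pv).mean(0)
    b_hid += lr * (ph - ph_rec).mean(0)

V = (rng.uniform(size=(32, 64)) < 0.3) * 1.0        # toy binary mini-batch
W, b_vis, b_hid = rng.normal(0, 0.01, (64, 25)), np.zeros(64), np.zeros(25)
for _ in range(100):
    cd1_step(V, W, b_vis, b_hid)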
A Simple Weight Decay Can Improve Generalization It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk.
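The mechanism is a one-line change to gradient descent; the sketch below trains a linear network on toy data with an L2 decay term added to the gradient. All settings are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)   # noisy targets

w, lr, decay = np.zeros(10), 0.01, 1e-3
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * (grad + decay * w)    # the decay term is the entire mechanism
print(np.linalg.norm(w))            # shrunk relative to the unregularized fit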
Classification using discriminative restricted Boltzmann machines Recently, many applications for Restricted Boltzmann Machines (RBMs) have been developed for a large variety of learning problems. However, RBMs are usually used as feature extractors for another learning algorithm or to provide a good initialization for deep feed-forward neural network classifiers, and are not considered as a standalone solution to classification problems. In this paper, we argue that RBMs provide a self-contained framework for deriving competitive non-linear classifiers. We present an evaluation of different learning algorithms for RBMs which aim at introducing a discriminative component to RBM training and improve their performance as classifiers. This approach is simple in that RBMs are used directly to build a classifier, rather than as a stepping stone. Finally, we demonstrate how discriminative RBMs can also be successfully employed in a semi-supervised setting.
Trace driven analysis of write caching policies for disks The I/O subsystem in a computer system is becoming the bottleneck as a result of recent dramatic improvements in processor speeds. Disk caches have been effective in closing this gap, but the benefit is restricted to read operations, as write I/Os are usually committed to disk to maintain consistency and to allow for crash recovery. As a result, write I/O traffic is becoming dominant and solutions to alleviate this problem are becoming increasingly important. A simple solution which can easily work with existing file systems is to use non-volatile disk caches together with a write-behind strategy. In this study, we look at the issues around managing such a cache using a detailed trace driven simulation. Traces from three different commercial sites are used in the analysis of various policies for managing the write cache. We observe that even a simple write-behind policy for the write cache is effective in reducing the total number of writes by over 50%. We further observe that the use of hysteresis in the policy to purge the write cache, with two thresholds, yields substantial improvement over a single threshold scheme. The inclusion of a mechanism to piggyback blocks from the write cache with read miss I/Os further reduces the number of writes to only about 15% of the original total number of write operations. We compare two piggybacking options and also study the impact of varying the write cache size. We briefly looked at the case of a single non-volatile disk cache to estimate the performance impact of statically partitioning the cache for reads and writes.
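The two-threshold hysteresis policy can be captured in a small model: destaging starts once the dirty fraction passes a high-water mark and continues until it drains below a low-water mark, so flushes happen in efficient bursts. The class below is a toy illustration, not the traced systems' implementation.

class WriteCache:
    def __init__(self, capacity, high=0.8, low=0.4):
        self.capacity, self.high, self.low = capacity, high, low
        self.dirty, self.purging = set(), False

    def write(self, block):
        self.dirty.add(block)
        if len(self.dirty) >= self.high * self.capacity:
            self.purging = True                 # passed the high-water mark
        while self.purging and self.dirty:
            self.dirty.pop()                    # stand-in for a disk write
            if len(self.dirty) <= self.low * self.capacity:
                self.purging = False            # drained to the low-water mark

cache = WriteCache(capacity=100)
for blk in range(500):
    cache.write(blk % 120)   # re-writes of hot blocks are absorbed in the cache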
Fault tolerant design of multimedia servers Recent technological advances have made multimedia on-demand servers feasible. Two challenging tasks in such systems are: a) satisfying the real-time requirement for continuous delivery of objects at specified bandwidths and b) efficiently servicing multiple clients simultaneously. To accomplish these tasks and realize economies of scale associated with servicing a large user population, the multimedia server can require a large disk subsystem. Although a single disk is fairly reliable, a large disk farm can have an unacceptably high probability of disk failure. Further, due to the real-time constraint, the reliability and availability requirements of multimedia systems are very stringent. In this paper we investigate techniques for providing a high degree of reliability and availability, at low disk storage, bandwidth, and memory costs for on-demand multimedia servers.
File system design using large memories It is shown using experimental data that file activity is fairly stable over time, and the implications of this finding for file system design are examined. Several file access patterns and how they may be exploited to improve file system performance are shown. In particular, it is shown that current file temperature can be used to predict future file temperature. The design of the iPcress file system, which uses both a large disk cache and other techniques to improve file system performance is outlined. iPcress has a variety of cache staging algorithms and can choose the one most appropriate for each file. iPcress also stores access histories for each file to guide decisions such as file layout on DASD and caching. Preliminary performance figures for iPcress are presented
Relating equivalence and reducibility to sparse sets For various polynomial-time reducibilities r, the authors ask whether being r-reducible to a sparse set is a broader notion than being r-equivalent to a sparse set. Although distinguishing equivalence and reducibility to sparse sets, for many-one or 1-truth-table reductions, would imply that P≠NP, the authors show that for k-truth-table reductions, k⩾2, equivalence and reducibility to sparse sets provably differ. Though R. Gavalda and D. Watanabe have shown that, for any polynomial-time computable unbounded function f(·), some sets f(n)-truth-table reducible to sparse sets are not even Turing equivalent to sparse sets, the authors show that extending their result to the 2-truth-table case would provide a proof that P≠NP. Additionally, the authors study the relative power of different notions of reducibility and show that disjunctive and conjunctive truth-table reductions to sparse sets are surprisingly powerful, refuting a conjecture of K. Ko (1989)
Unsupervised (Parameter) Learning for MRFs on Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach: a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double-loop algorithm and PCD learning experimentally and show that the former converges faster and more stably than the latter.
score_0–score_13: 1.076444, 0.066667, 0.022222, 0.011111, 0.004133, 0.000252, 0.000111, 0.000022, 0.000003, 0, 0, 0, 0, 0
Maximum Entropy Learning with Deep Belief Networks. Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing only the entropy of parameters in the DBN allows more effective generalization capability, less bias towards data distributions, and robustness to over-fitting compared to ML learning. Results of text classification and object recognition tasks demonstrate ME-trained DBN outperforms ML-trained DBN when training data is limited.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also offers a way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
PTL-CFS based deep convolutional neural network model for remote sensing classification. Processing high-dimensional remote sensing image data with conventional convolutional neural networks raises issues such as prolonged convergence time, vanishing gradients and convergence to non-minimum values, due to the high time complexity and randomly initialized parameters of these models. To address these issues, this article proposes a convolutional neural network remote sensing classification model based on PTL-CFS. The approach first uses a parameter transfer learning (PTL) algorithm to obtain the CNN initialization parameters for the target area, then applies a correlation-based feature selection (CFS) algorithm to eliminate redundant features and noise from the original feature set, and finally classifies the remote sensing images with a conventional CNN model. The article validates this model by classifying remote sensing images of the Zhalong wetland, Heilongjiang. Experiments show that adding PTL accelerates convergence of the CNN's loss function, and that combining it with the CFS algorithm reduces execution time and yields better classification accuracy than competing algorithms.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Tracking MUSes and Strict Inconsistent Covers In this paper, a new heuristic-based approach is introduced to extract minimally unsatisfiable subformulas (in short, MUSes) of SAT instances. It is shown that it often outperforms current competing methods. Then, the focus is on inconsistent covers, which represent sets of MUSes that cover enough independent sources of infeasibility for the instance to regain satisfiability if they were repaired. As the number of MUSes can be exponential with respect to the size of the instance, it is shown that such a concept is often a viable trade-off since it does not require us to compute all MUSes but provides us with enough mutually independent infeasibility causes that need to be addressed in order to restore satisfiability.
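As background for this entry and the MUS/MUC papers that follow, the simplest complete extractor is the deletion-based loop, which heuristic approaches like the one above try to beat. A Python sketch assuming a hypothetical is_sat(clauses) oracle (any SAT solver binding would do); this is the textbook baseline, not the paper's algorithm:

```python
def extract_mus(clauses, is_sat):
    """Deletion-based MUS extraction (textbook baseline, not the paper's method).

    clauses: list of clauses whose conjunction is unsatisfiable.
    is_sat:  hypothetical oracle returning True iff the given clause
             list is satisfiable (plug in any SAT solver here).
    Returns a minimally unsatisfiable subset of `clauses`.
    """
    mus = list(clauses)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i + 1:]  # tentatively drop clause i
        if is_sat(candidate):
            i += 1           # clause i is critical: keep it and move on
        else:
            mus = candidate  # still unsatisfiable without it: drop it
    return mus
```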
Boosting MUC extraction in unsatisfiable constraint networks One very fertile domain of applied Artificial Intelligence is constraint solving technologies, especially constraint networks, which concern problems that can be represented using discrete variables together with constraints on the allowed instantiation values for these variables. Every solution to a constraint network must satisfy every constraint. When no solution exists, the user might want to know the actual reasons leading to the absence of a global solution. In this respect, extracting MUCs (Minimal Unsatisfiable Cores) from an unsatisfiable constraint network is a useful process when the causes of unsatisfiability must be understood so that the network can be re-engineered and relaxed to become satisfiable. Despite bad worst-case computational complexity results, various MUC-finding approaches that appear tractable for many real-life instances have been proposed. Many of them are based on the successive identification of so-called transition constraints. In this respect, we show how local search can be used to possibly extract additional transition constraints at each main iteration step. In the general constraint networks setting, the approach is shown to outperform a technique based on a form of model rotation imported from the SAT-related technology that also exhibits additional transition constraints. Our extensive computational experiments show that this enhancement also boosts the performance of state-of-the-art DC(WCORE)-like MUC extractors.
Extracting MUCs from Constraint Networks We address the problem of extracting Minimal Unsatisfiable Cores (MUCs) from constraint networks. This computationally hard problem has a practical interest in many application domains such as configuration, planning, diagnosis, etc. Indeed, identifying one or several disjoint MUCs can help circumscribe different sources of inconsistency in order to repair a system. In this paper, we propose an original approach that involves performing successive runs of a complete backtracking search, using constraint weighting, in order to surround an inconsistent part of a network, before identifying all transition constraints belonging to a MUC using a dichotomic process. We show the effectiveness of this approach, both theoretically and experimentally.
Extracting minimum unsatisfiable cores with a greedy genetic algorithm Explaining the causes of infeasibility of Boolean formulas has practical applications in various fields. We are generally interested in a minimum explanation of infeasibility that excludes irrelevant information. A smallest-cardinality unsatisfiable subset, called a minimum unsatisfiable core, can provide a succinct explanation of infeasibility and is valuable for applications. However little attention has been concentrated on extraction of minimum unsatisfiable cores. In this paper, we propose an efficient greedy genetic algorithm to derive an exact or nearly exact minimum unsatisfiable core. It takes advantage of the relationship between maximal satisfiability and minimum unsatisfiability. We report experimental results on practical benchmarks, as compared with the branch-and-bound algorithm and the ant colony optimization.
Faster extraction of high-level minimal unsatisfiable cores Various verification techniques are based on SAT's capability to identify a small, or even minimal, unsatisfiable core in case the formula is unsatisfiable, i.e., a small subset of the clauses that are unsatisfiable regardless of the rest of the formula. In most cases it is not the core itself that is being used, rather it is processed further in order to check which clauses from a preknown set of Interesting Constraints (where each constraint is modeled with a conjunction of clauses) participate in the proof. The problem of minimizing the participation of interesting constraints was recently coined high-level minimal unsatisfiable core by Nadel [15]. Two prominent examples of verification techniques that need such small cores are 1) abstraction-refinement model-checking techniques, which use the core in order to identify the state variables that will be used for refinement (smaller number of such variables in the core implies that more state variables can be replaced with free inputs in the abstract model), and 2) assumption minimization, where the goal is to minimize the usage of environment assumptions in the proof, because these assumptions have to be proved separately. We propose seven improvements to the recent solution given in [15], which together result in an overall reduction of 55% in run time and 73% in the size of the resulting core, based on our experiments with hundreds of industrial test cases. The optimized procedure is also better empirically than the assumptions-based minimization technique.
A branch-and-bound algorithm for extracting smallest minimal unsatisfiable formulas We tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. The SMUS provides a succinct explanation of infeasibility and is valuable for applications that rely on such explanations. We present a branch-and-bound algorithm that utilizes iterative MAXSAT solutions to generate lower and upper bounds on the size of the SMUS, and branch on specific subformulas to find it. We report experimental results on formulas from DIMACS and DaimlerChrysler product configuration suites.
Using local search to find MSSes and MUSes In this paper, a new complete technique to compute Maximal Satisfiable Subsets (MSSes) and Minimally Unsatisfiable Subformulas (MUSes) of sets of Boolean clauses is introduced. The approach improves the currently most efficient complete technique in several ways. It makes use of the powerful concept of critical clause and of a computationally inexpensive local search oracle to boost an exhaustive algorithm proposed by Liffiton and Sakallah. These features can allow exponential efficiency gains to be obtained. Accordingly, experimental studies show that this new approach outperforms the best current existing exhaustive ones.
Two forms of dependence in propositional logic: controllability and definability We investigate two forms of dependence between variables and/or formulas within a propositional knowledge base: controllability (a set of variables X controls a formula Σ if there is a way to fix the truth value of the variables in X so that Σ takes a prescribed truth value) and definability (X defines a variable y if every truth assignment of the variables in X enables us to find out the truth value of y). Several characterization results are pointed out, complexity issues are analyzed, and some applications of both notions, including decision under incomplete knowledge and/or partial observability, and hypothesis discrimination, are sketched.
The KR System dlv: Progress Report, Comparisons and Benchmarks
Dynamo: amazon's highly available key-value store Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an "always-on" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use.
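The "object versioning" mentioned in this abstract is commonly realized with vector clocks, one counter per coordinating node, so that replicas can tell causally ordered versions from concurrent ones. A minimal sketch of the comparison logic (the dict representation is an assumption for illustration, not Dynamo's actual format):

```python
def vc_increment(clock, node):
    """Advance `node`'s counter in a vector clock (dict: node -> counter)."""
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def vc_descends(a, b):
    """True iff version `a` is causally at or after version `b`."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def vc_conflict(a, b):
    """Concurrent versions: neither descends from the other. A Dynamo-style
    store would return both to the application for reconciliation."""
    return not vc_descends(a, b) and not vc_descends(b, a)
```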
Reasoning about actions with sensing under qualitative and probabilistic uncertainty We focus on the aspect of sensing in reasoning about actions under qualitative and probabilistic uncertainty. We first define the action language E for reasoning about actions with sensing, which has a semantics based on the autoepistemic description logic ALCKNF, and which is given a formal semantics via a system of deterministic transitions between epistemic states. As an important feature, the main computational tasks in E can be done in linear and quadratic time. We then introduce the action language E+ for reasoning about actions with sensing under qualitative and probabilistic uncertainty, which is an extension of E by actions with nondeterministic and probabilistic effects, and which is given a formal semantics in a system of deterministic, nondeterministic, and probabilistic transitions between epistemic states. We also define the notion of a belief graph, which represents the belief state of an agent after a sequence of deterministic, nondeterministic, and probabilistic actions, and which compactly represents a set of unnormalized probability distributions. Using belief graphs, we then introduce the notion of a conditional plan and its goodness for reasoning about actions under qualitative and probabilistic uncertainty. We formulate the problems of optimal and threshold conditional planning under qualitative and probabilistic uncertainty, and show that they are both uncomputable in general. We then give two algorithms for conditional planning in our framework. The first one is always sound, and it is also complete for the special case in which the relevant transitions between epistemic states are cycle-free. The second algorithm is a sound and complete solution to the problem of finite-horizon conditional planning in our framework. Under suitable assumptions, it computes every optimal finite-horizon conditional plan in polynomial time. We also describe an application of our formalism in a robotic-soccer scenario, which underlines its usefulness in realistic applications.
Features for audio and music classification Four audio feature sets are evaluated in their ability to classify five general audio classes and seven popular music genres. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and two new sets based on perceptual models of hearing. The temporal behavior of the features is analyzed and parameterized and these parameters are included as additional features. Using a standard Gaussian framework for classification, results show that the temporal behavior of features is important for both music and audio classification. In addition, classification is better, on average, if based on features from models of auditory perception rather than on standard features.
Global Reinforcement Learning in Neural Networks with Stochastic Synapses We have found a more general formulation of the REINFORCE learning principle which had been proposed by R. J. Williams for the case of artificial neural networks with stochastic cells ("Boltzmann machines"). This formulation has enabled us to apply the principle to global reinforcement learning in networks with deterministic neural cells but stochastic synapses, and to suggest two groups of new learning rules for such networks, including simple local rules. Numerical simulations have shown that at least for several popular benchmark problems one of the new learning rules may provide results on a par with the best known global reinforcement techniques.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
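The parity elements this abstract counts are XOR sums over the rows and columns of the data grid, and the proposed extra redundancy merely mirrors half of them. A toy sketch of the underlying arithmetic (block layout and sizes are illustrative assumptions):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a non-empty list of equal-length byte blocks together."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

def two_d_parity(data):
    """data: n x n grid of equal-length byte blocks.
    Returns (row_parities, col_parities), the 2n parity elements of the
    two-dimensional scheme; the n extra elements proposed in the paper
    would simply duplicate half of these.
    """
    n = len(data)
    row_parities = [xor_blocks(data[i]) for i in range(n)]
    col_parities = [xor_blocks([data[i][j] for i in range(n)]) for j in range(n)]
    return row_parities, col_parities
```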
1.051771
0.05
0.021403
0.01448
0.008461
0.004467
0.00061
0.00001
0
0
0
0
0
0
A Fault-Tolerant Middleware Architecture for High-Availability Storage Services Today organizations and business enterprises of all sizes need to deal with unprecedented amounts of digital information, creating challenging demands for mass storage and on-demand storage services. The current trend of clustered scale-out storage systems use symmetric active replication based clustering middleware to provide continuous availability and high throughput. Such architectures provide significant gains in terms of cost, scalability and performance of mass storage and storage services. However, a fundamental limitation of such an architecture is its vulnerability to application-induced massive dependent failures of the clustering middleware. In this paper, we propose hierarchical middleware architectures that improve availability and reliability in scale-out storage systems while continuing to deliver the cost and performance advantages and a single system image (SSI). Hierarchical middleware architectures organize critical cluster management services into an overlay network that provides application fault isolation and eliminates symmetric clustering middleware as a single-point-of-failure. We present an in-depth evaluation of hierarchical middlewares based on an industry-strength storage system. Our results show that hierarchical architectures can significantly improve availability and reliability of scale-out storage clusters.
The software architecture of a SAN storage control system We describe an architecture of an enterprise-level storage control system that addresses the issues of storage management for storage area network (SAN)-attached block devices in a heterogeneous open systems environment. The storage control system, also referred to as the "storage virtualization engine," is built on a cluster of Linux®-based servers, which provides redundancy, modularity, and scalability. We discuss the software architecture of the storage control system and describe its major components: the cluster operating environment, the distributed I/O facilities, the buffer management component, and the hierarchical object pools for managing memory resources. We also describe some preliminary results that indicate the system will achieve its goals of improving the utilization of storage resources, providing a platform for advanced storage functions, using off-the-shelf hardware components and a standard operating system, and facilitating upgrades to new generations of hardware, different hardware platforms, and new storage functions.
The HP AutoRAID hierarchical storage system Configuring redundant disk arrays is a black art. To configure an array properly, a system administrator must understand the details of both the array and the workload it will support. Incorrect understanding of either, or changes in the workload over time, can lead to poor performance. We present a solution to this problem: a two-level storage hierarchy implemented inside a single disk-array controller. In the upper level of this hierarchy, two copies of active data are stored to provide full redundancy and excellent performance. In the lower level, RAID 5 parity protection is used to provide excellent storage cost for inactive data, at somewhat lower performance. The technology we describe in this article, known as HP AutoRAID, automatically and transparently manages migration of data blocks between these two levels as access patterns change. The result is a fully redundant storage system that is extremely easy to use, is suitable for a wide variety of workloads, is largely insensitive to dynamic workload changes, and performs much better than disk arrays with comparable numbers of spindles and much larger amounts of front-end RAM cache. Because the implementation of the HP AutoRAID technology is almost entirely in software, the additional hardware cost for these benefits is very small. We describe the HP AutoRAID technology in detail, provide performance data for an embodiment of it in a storage array, and summarize the results of simulation studies used to choose algorithms implemented in the array.
Background data movement in a log-structured disk subsystem The log-structured disk subsystem is a new concept for the use of disk storage whose future application has enormous potential. In such a subsystem, all writes are organized into a log, each entry of which is placed into the next available free storage. A directory indicates the physical location of each logical object (e.g., each file block or track image) as known to the processor originating the I/O request. For those objects that have been written more than once, the directory retains the location of the most recent copy. Other work with log-structured disk subsystems has shown that they are capable of high write throughputs. However, the fragmentation of free storage due to the scattered locations of data that become out of date can become a problem in sustained operation. To control fragmentation, it is necessary to perform ongoing garbage collection, in which the location of stored data is shifted to release unused storage for re-use. This paper introduces a mathematical model of garbage collection, and shows how collection load relates to the utilization of storage and the amount of locality present in the pattern of updates. A realistic statistical model of updates, based upon trace data analysis, is applied. In addition, alternative policies are examined for determining which data areas to collect. The key conclusion of our analysis is that in environments with the scattered update patterns typical of database I/O, the utilization of storage must be controlled in order to achieve the high write throughput of which the subsystem is capable. In addition, the presence of data locality makes it important to take the past history of data into account in determining the next area of storage to be garbage-collected.
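The policy question raised at the end of this entry, which area of storage to collect next, is usually answered by a greedy rule or by a cost-benefit rule that also weighs segment age, matching the paper's conclusion that past history matters. A sketch of both selectors (the segment fields live_fraction and age are assumptions for illustration; the cost-benefit ratio follows the well-known log-structured file system formulation, not necessarily this paper's model):

```python
def pick_greedy(segments):
    """Greedy policy: collect the segment with the least live data,
    i.e., the one that is cheapest to clean right now."""
    return min(segments, key=lambda s: s["live_fraction"])

def pick_cost_benefit(segments):
    """Cost-benefit policy: trade reclaimed space against copying cost,
    preferring colder (older) segments."""
    def benefit_over_cost(s):
        u = s["live_fraction"]  # fraction of the segment still live
        return (1.0 - u) * s["age"] / (1.0 + u)
    return max(segments, key=benefit_over_cost)
```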
Incremental recovery in main memory database systems Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM.
The architecture of a fault-tolerant cached RAID controller RAID-5 arrays need 4 disk accesses to update a data block—2 to read old data and parity, and 2 to write new data and parity. Schemes previously proposed to improve the update performance of such arrays are the Log-Structured File System [10] and the Floating Parity Approach [6]. Here, we consider a third approach, called Fast Write, which eliminates disk time from the host response time to a write, by using a Non-Volatile Cache in the disk array controller. We examine three alternatives for handling Fast Writes and describe a hierarchy of destage algorithms with increasing robustness to failures. These destage algorithms are compared against those that would be used by a disk controller employing mirroring. We show that array controllers require considerably more (2 to 3 times more) bus bandwidth and memory bandwidth than do disk controllers that employ mirroring. So, array controllers that use parity are likely to be more expensive than controllers that do mirroring, though mirroring is more expensive when both controllers and disks are considered.
Read Optimized File System Designs: A Performance Evaluation This paper presents a performance comparison of several file system allocation policies. The file systems are designed to provide high bandwidth between disks and main memory by taking advantage of parallelism in an underlying disk array, catering to large units of transfer, and minimizing the bandwidth dedicated to the transfer of meta data. All of the file systems described use a multiblock allocation strategy which allows both large and small files to be allocated efficiently. Simulation results show that these multiblock policies result in systems that are able to utilize a large percentage of the underlying disk bandwidth; more than 90% in sequential cases. As general purpose systems are called upon to support more data intensive applications such as databases and supercomputing, these policies offer an opportunity to provide superior performance to a larger class of users.
Performance Evaluation of Grid Based Multi-Attribute Record Declustering Methods We focus on multi-attribute declustering methods which are based on some type of grid-based partitioning of the data space. Theoretical results are derived which show that no declustering method can be strictly optimal for range queries if the number of disks is greater than 5. A detailed performance evaluation is carried out to see how various declustering schemes perform under a wide range of query and database scenarios (both relative to each other and to the optimal). Parameters that are varied include shape and size of queries, database size, number of attributes and the number of disks. The results show that information about common queries on a relation is very important and ought to be used in deciding the declustering for it, and that this is especially crucial for small queries. Also, there is no clear winner, and as such parallel database systems must support a number of declustering methods.
An Evaluation of Multiple-Disk I/O Systems Alternative ways of configuring an I/O subsystem with multiple disks to improve the I/O performance are considered. Specifically, the authors consider disk synchronization, data declustering/disk striping, and a combination of both these approaches. They evaluate many different organizations that have not been considered before. The effects of block size and other parameters of the system are examined. Two different workloads are considered for the evaluation: a file/transaction system workload and a scientific applications workload. Through simulations it is shown that synchronized organizations perform better than other organizations at very low request rates; that there is a tradeoff in the amount of declustering/synchronization to be used in a system; and that systems with higher parallelism in reading a file perform better in a scientific workload.
Experiments with a New Boosting Algorithm In an earlier paper [9], we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss" which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's [1] "bagging" method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
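The boosting loop these experiments evaluate fits in a few lines: reweight the training examples after each round so the next weak learner concentrates on past mistakes. A minimal sketch with decision stumps from scikit-learn, assuming binary labels in {-1, +1} (the round count and the stump choice are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Minimal AdaBoost sketch for labels y in {-1, +1}.
    Returns (learners, alphas) defining sign(sum_t alpha_t * h_t(x))."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # uniform initial example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.sum(w[pred != y])  # weighted training error (w sums to 1)
        if err >= 0.5:              # weak learner no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)  # up-weight mistakes, down-weight hits
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    return np.sign(sum(a * h.predict(X) for h, a in zip(learners, alphas)))
```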
Two remarks on the power of counting The relationship between the polynomial hierarchy and Valiant's class #P is at present unknown. We show that some low portions of the polynomial hierarchy, namely deterministic polynomial algorithms using an NP oracle at most a logarithmic number of times, can be simulated by one #P computation. We also show that the class of problems solvable by polynomial-time nondeterministic Turing machines which accept whenever there is an odd number of accepting computations is idempotent, that is, closed under usage of oracles from the same class.
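Both results in this entry have standard one-line statements in modern notation:

```latex
\mathrm{P}^{\mathrm{NP}[O(\log n)]} \subseteq \mathrm{P}^{\#\mathrm{P}[1]}
\qquad\text{and}\qquad
\oplus\mathrm{P}^{\oplus\mathrm{P}} = \oplus\mathrm{P}
```

that is, logarithmically many NP queries can be answered with a single #P query, and ⊕P is closed under ⊕P oracles.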
Multimedia file serving with the OS/390 LAN server The rapidly increasing storage and transmission capacities of computers and the progress in compression algorithms make it possible to build multimedia applications that include audio and video. Such applications range from educational and training videos, delivered to desktops in schools and enterprises, to entertainment services at home. Applications developed for stand-alone personal computers can be deployed in distributed systems without change by using the client/server model and file servers that allow the sharing of applications among many users. The OS/390 LAN Server has been enhanced to support multimedia data delivery. Resource management and admission control, wide disk striping to provide high data bandwidths, and multimedia-specific performance enhancements have been added. The resulting server benefits from the robustness, scalability, and flexibility of the S/390 system environment, which allows it to move into new multimedia applications. Multimedia support on a robust, widely installed platform with little or no additional hardware requirements gives customers the opportunity to enhance their existing applications with multimedia features and then expand their capacity as the demands of the applications increase. This multimedia server platform is in use with several interesting applications.
Exploiting Web Log Mining for Web Cache Enhancement Improving the performance of the Web is a crucial requirement, since its popularity resulted in a large increase in the user perceived latency. In this paper, we describe a Web caching scheme that capitalizes on prefetching. Prefetching refers to the mechanism of deducing forthcoming page accesses of a client, based on access log information. Web log mining methods are exploited to provide effective prediction of Web-user accesses. The proposed scheme achieves a coordination between the two techniques (i.e., caching and prefetching). The prefetched documents are accommodated in a dedicated part of the cache, to avoid the drawback of incorrect replacement of requested documents. The requirements of the Web are taken into account, compared to the existing schemes for buffer management in database and operating systems. Experimental results indicate the superiority of the proposed method compared to the previous ones, in terms of improvement in cache performance.
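The coordination this abstract proposes, caching plus prefetching with a dedicated cache partition so speculative documents never evict requested ones, can be sketched with two LRU pools. The class below is a sketch under assumptions: the pool sizes, the fetch callback, and the predict_next hook (standing in for the Web-log-mining predictor) are all illustrative, not from the paper:

```python
from collections import OrderedDict

class PrefetchingCache:
    """Cache split into a demand pool and a prefetch pool, so prefetched
    documents cannot displace explicitly requested ones."""

    def __init__(self, demand_size, prefetch_size, predict_next):
        self.demand = OrderedDict()      # LRU pool for requested documents
        self.prefetch = OrderedDict()    # LRU pool for predicted documents
        self.demand_size = demand_size
        self.prefetch_size = prefetch_size
        self.predict_next = predict_next # log-mining predictor (assumed hook)

    def get(self, key, fetch):
        if key in self.prefetch:         # predicted document actually requested:
            self.demand[key] = self.prefetch.pop(key)  # promote it
        elif key not in self.demand:
            self.demand[key] = fetch(key)               # demand miss
        self.demand.move_to_end(key)
        while len(self.demand) > self.demand_size:
            self.demand.popitem(last=False)
        for nxt in self.predict_next(key):  # speculative fill of the prefetch pool
            if nxt not in self.demand and nxt not in self.prefetch:
                self.prefetch[nxt] = fetch(nxt)
                while len(self.prefetch) > self.prefetch_size:
                    self.prefetch.popitem(last=False)
        return self.demand[key]
```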
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2496
0.2496
0.002937
0.000234
0.000065
0.000048
0.000022
0.000008
0
0
0
0
0
0
Experimental evaluation of pheromone models in ACOPlan In this paper the system ACOPlan for planning with non-uniform action cost is introduced and analyzed. ACOPlan is a planner based on the ant colony optimization framework, in which a colony of planning ants searches for near-optimal solution plans with respect to an overall plan cost metric. This approach is motivated by the strong similarity between the process used by artificial ants to build solutions and the methods used by state-based planners to search for solution plans. Planning ants perform a stochastic, heuristic-based search by interacting through a pheromone model. The proposed heuristic and pheromone models are presented and compared through systematic experiments on benchmark planning domains. Experiments are also provided to compare the quality of ACOPlan solution plans with that of state-of-the-art satisficing planners. The analysis of the results confirms the good performance of the Action-Action pheromone model and points out the promising performance of the novel Fuzzy-Level-Action pheromone model. The analysis also suggests general principles for designing performant pheromone models for planning and further extensions of ACOPlan to other optimization models.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Agent planning programs This work proposes a novel high-level paradigm, agent planning programs, for modeling agents behavior, which suitably mixes automated planning with agent-oriented programming. Agent planning programs are finite-state programs, possibly containing loops, whose atomic instructions consist of a guard, a maintenance goal, and an achievement goal, which act as precondition-invariance-postcondition assertions in program specification. Such programs are to be executed in possibly nondeterministic planning domains and their execution requires generating plans that meet the goals specified in the atomic instructions, while respecting the program control flow. In this paper, we define the problem of automatically synthesizing the required plans to execute an agent planning program, propose a solution technique based on model checking of two-player game structures, and use it to characterize the worst-case computational complexity of the problem as EXPTIME-complete. Then, we consider the case of deterministic domains and propose a different technique to solve agent planning programs, which is based on iteratively solving classical planning problems and on exploiting goal preferences and plan adaptation methods. Finally, we study the effectiveness of this approach for deterministic domains through an experimental analysis on well-known planning domains.
Red-black planning: A new systematic approach to partial delete relaxation. To date, delete relaxation underlies some of the most effective heuristics for deterministic planning. Despite its success, however, delete relaxation has significant pitfalls in many important classes of planning domains, and it has been a challenge from the outset to devise heuristics that take some deletes into account. We herein devise an elegant and simple method for doing just that. In the context of finite-domain state variables, we define red variables to take the relaxed semantics, in which they accumulate their values rather than switching between them, as opposed to black variables that take the regular semantics. Red–black planning then interpolates between relaxed planning and regular planning simply by allowing a subset of variables to be painted red. We investigate the tractability region of red–black planning, extending Chen and Giménez' characterization theorems for regular planning to the more general red–black setting. In particular, we identify significant islands of tractable red–black planning, use them to design practical heuristic functions, and experiment with a range of “painting strategies” for automatically choosing the red variables. Our experiments show that these new heuristic functions can improve significantly on the state of the art in satisficing planning.
PROST: Probabilistic Planning Based on UCT.
How good is almost perfect? Heuristic search using algorithms such as A* and IDA* is the prevalent method for obtaining optimal sequential solutions for classical planning tasks. Theoretical analyses of these classical search algorithms, such as the well-known results of Pohl, Gaschnig and Pearl, suggest that such heuristic search algorithms can obtain better than exponential scaling behaviour, provided that the heuristics are accurate enough. Here, we show that for a number of common planning benchmark domains, including ones that admit optimal solution in polynomial time, general search algorithms such as A* must necessarily explore an exponential number of search nodes even under the optimistic assumption of almost perfect heuristic estimators, whose heuristic error is bounded by a small additive constant. Our results shed some light on the comparatively bad performance of optimal heuristic search approaches in "simple" planning domains such as GRIPPER. They suggest that in many applications, further improvements in run-time require changes to other parts of the search algorithm than the heuristic estimator.
Some Results on the Complexity of Planning with Incomplete Information Planning with incomplete information may mean a number of different things: that certain facts of the initial state are not known, that operators can have random or nondeterministic effects, or that the plans created contain sensing operations and are branching. Study of the complexity of incomplete information planning has so far been concentrated on probabilistic domains, where a number of results have been found. We examine the complexity of planning in nondeterministic propositional...
Extended stable semantics for normal and disjunctive programs
The design and implementation of VAMPIRE In this article we describe VAMPIRE: a high-performance theorem prover for first-order logic. As our description is mostly targeted to the developers of such systems and specialists in automated reasoning, it focuses on the design of the system and some key implementation features. We also analyze the performance of the prover at CASC-JC.
The SPHINX-II Speech Recognition System: An Overview In order for speech recognizers to deal with increased task perplexity, speaker variation, and environment variation, improved speech recognition is critical. Steady progress has been made along these three dimensions at Carnegie Mellon. In this paper, we review the SPHINX-II speech recognition system and summarize our recent efforts on improved speech recognition.
Nonmonotonic reasoning in the framework of situation calculus Most of the solutions proposed to the Yale shooting problem have either introduced new nonmonotonic reasoning methods (generally involving temporal priorities) or completely reformulated the domain axioms to represent causality explicitly. This paper presents a new solution based on the idea that since the abnormality predicate takes a situational argument, it is important for the meanings of the situations to be held constant across the various models being compared. This is accomplished by a...
Monotonic reductions, representative equivalence, and compilation of intractable problems The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science. In this article, we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice.
Scheduling a mixed interactive and batch workload on a parallel, shared memory supercomputer
The closure of monadic NP It is a well-known result of Fagin that the complexity class NP coincides with the class of problems expressible in existential second-order logic (Σ¹₁), which allows sentences consisting of a string of existential second-order quantifiers followed by a first-order formula. Monadic NP is the class of problems expressible in monadic Σ¹₁, i.e., Σ¹₁ with the restriction that the second-order quantifiers are all unary and hence range only over sets (as opposed to ranging over, say, binary relations). For example, the property of a graph being 3-colorable belongs to monadic NP, because 3-colorability can be expressed by saying that there exist three sets of vertices such that each vertex is in exactly one of the sets and no two vertices in the same set are connected by an edge. Unfortunately, monadic NP is not a robust class, in that it is not closed under first-order quantification. We define closed monadic NP to be the closure of monadic NP under first-order quantification and existential unary second-order quantification. Thus, closed monadic NP differs from monadic NP in that we allow the possibility of arbitrary interleavings of first-order quantifiers among the existential unary second-order quantifiers. We show that closed monadic NP is a natural, rich, and robust subclass of NP. As evidence for its richness, we show that not only is it a proper extension of monadic NP, but that it contains properties not in various other extensions of monadic NP. In particular, we show that closed monadic NP contains an undirected graph property not in the closure of monadic NP under first-order quantification and Boolean operations. Our lower-bound proofs require a number of new game-theoretic techniques.
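The 3-colorability example in this entry is worth writing out, since it displays the monadic Σ¹₁ shape exactly: unary second-order quantifiers followed by a first-order formula (E is the edge relation):

```latex
\exists R\,\exists G\,\exists B\;\forall x\,\forall y\;
\Bigl[ \bigl(R(x) \lor G(x) \lor B(x)\bigr)
\land \neg\bigl(R(x) \land G(x)\bigr)
\land \neg\bigl(R(x) \land B(x)\bigr)
\land \neg\bigl(G(x) \land B(x)\bigr)
\land \Bigl(E(x,y) \rightarrow
      \neg\bigl((R(x) \land R(y)) \lor (G(x) \land G(y)) \lor (B(x) \land B(y))\bigr)\Bigr)\Bigr]
```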
Representing the process semantics in the situation calculus This paper presents a formal method based on the high-level semantics of processes to reason about continuous change. With a case study we show how the semantics of processes can be integrated with the situation calculus. The soundness and completeness of the situation calculus with respect to the process semantics are proven. Furthermore, a logic programming implementation is provided to support the semantics of processes within the situation calculus.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.2
0.2
0.1
0.04
0.006061
0
0
0
0
0
0
0
0
0
Monotonic reductions, representative equivalence, and compilation of intractable problems The idea of preprocessing part of the input of a problem in order to improve efficiency has been employed by several researchers in several areas of computer science. In this article, we show sufficient conditions to prove that an intractable problem cannot be efficiently solved even allowing an exponentially long preprocessing phase. The generality of such conditions is shown by applying them to various problems coming from different fields. While the results may seem to discourage the use of compilation, we present some evidence that such negative results are useful in practice.
The closure of monadic NP It is a well-known result of Fagin that the complexity class NP coincides with the class of problems expressible in existential second-order logic (Σ¹₁), which allows sentences consisting of a string of existential second-order quantifiers followed by a first-order formula. Monadic NP is the class of problems expressible in monadic Σ¹₁, i.e., Σ¹₁ with the restriction that the second-order quantifiers are all unary and hence range only over sets (as opposed to ranging over, say, binary relations). For example, the property of a graph being 3-colorable belongs to monadic NP, because 3-colorability can be expressed by saying that there exist three sets of vertices such that each vertex is in exactly one of the sets and no two vertices in the same set are connected by an edge. Unfortunately, monadic NP is not a robust class, in that it is not closed under first-order quantification. We define closed monadic NP to be the closure of monadic NP under first-order quantification and existential unary second-order quantification. Thus, closed monadic NP differs from monadic NP in that we allow the possibility of arbitrary interleavings of first-order quantifiers among the existential unary second-order quantifiers. We show that closed monadic NP is a natural, rich, and robust subclass of NP. As evidence for its richness, we show that not only is it a proper extension of monadic NP, but that it contains properties not in various other extensions of monadic NP. In particular, we show that closed monadic NP contains an undirected graph property not in the closure of monadic NP under first-order quantification and Boolean operations. Our lower-bound proofs require a number of new game-theoretic techniques.
The size of MDP factored policies Policies of Markov Decision Processes (MDPs) tell the next action to execute, given the current state and (possibly) the history of actions executed so far. Factorization is used when the number of states is exponentially large: both the MDP and the policy can then be represented in a compact form, for example employing circuits. We prove that there are MDPs whose optimal policies require exponential space even in factored form.
A branching heuristics for quantified renamable horn formulas Many solvers have been designed over the past few years for $\mathcal{QBF}$, the validity problem for Quantified Boolean Formulas. In this paper, we describe a new branching heuristic whose purpose is to promote renamable Horn formulas. This heuristic is based on Hébrard's algorithm for the recognition of such formulas. We present some experimental results obtained by our QBF solver Qbfl with the new branching heuristic and show how its performance is improved.
Relationships between nondeterministic and deterministic tape complexities The amount of storage needed to simulate a nondeterministic tape bounded Turing machine on a deterministic Turing machine is investigated. Results include the following: Theorem. A nondeterministic L(n)-tape bounded Turing machine can be simulated by a deterministic [L(n)]^2-tape bounded Turing machine, provided $L(n) \geq \log_2 n$. Computations of nondeterministic machines are shown to correspond to threadings of certain mazes. This correspondence is used to produce a specific set, namely the set of all codings of threadable mazes, such that, if there is any set which distinguishes nondeterministic tape complexity classes from deterministic tape complexity classes, then this is one such set.
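In modern complexity-class notation, the quoted theorem is Savitch's theorem:

$$\mathrm{NSPACE}\bigl(L(n)\bigr)\subseteq\mathrm{DSPACE}\bigl(L(n)^2\bigr)\qquad\text{for } L(n)\geq\log_2 n.$$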
The Minimization Problem for Boolean Formulas More than a quarter of a century ago, the question of the complexity of determining whether a given Boolean formula is minimal motivated Meyer and Stockmeyer to define the polynomial hierarchy. This problem (in the standard formalized version---that of Garey and Johnson) has been known for decades to be coNP-hard and in $\mathrm{NP}^{\mathrm{NP}}$, and yet no one had even been able to establish (many-one) NP-hardness. In this paper, we show that and more: The problem in fact is (many-one) hard for parallel access to NP.
The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5], where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the “counting polynomial-time hierarchy” which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved.
On the Complexity of Plan Adaptation by Derivational Analogy in a Universal Classical Planning Framework In this paper we present an algorithm called DerUCP, which can be regarded as a general model for plan adaptation using Derivational Analogy. Using DerUCP, we show that previous results on the complexity of plan adaptation do not apply to Derivational Analogy. We also show that Derivational Analogy can potentially produce exponential reductions in the size of the search space generated by a planning system.
Fixed-Parameter Tractability and Completeness I: Basic Results For many fixed-parameter problems that are trivially solvable in polynomial time, such as ($k$-)DOMINATING SET, essentially no better algorithm is presently known than the one which tries all possible solutions. Other problems, such as ($k$-)FEEDBACK VERTEX SET, exhibit fixed-parameter tractability: for each fixed $k$ the problem is solvable in time bounded by a polynomial of degree $c$, where $c$ is a constant independent of $k$. We establish the main results of a completeness program which addresses the apparent fixed-parameter intractability of many parameterized problems. In particular, we define a hierarchy of classes of parameterized problems $FPT \subseteq W[1] \subseteq W[2] \subseteq \cdots \subseteq W[SAT] \subseteq W[P]$ and identify natural complete problems for $W[t]$ for $t \geq 2$. (In other papers we have shown many problems complete for $W[1]$.) DOMINATING SET is shown to be complete for $W[2]$, and thus is not fixed-parameter tractable unless INDEPENDENT SET, CLIQUE, IRREDUNDANT SET and many other natural problems in $W[2]$ are also fixed-parameter tractable. We also give a compendium of currently known hardness results as an appendix.
On the complexity of the containment problem for conjunctive queries with built-in predicates
A logic-based calculus of events Formal Logic can be used to represent knowledge of many kinds for many purposes. It can be used to formalize programs, program specifications, databases, legislation, and natural language in general. For many such applications of logic a representation of time is necessary. Although there have been several attempts to formalize the notion of time in classical first-order logic, it is still widely believed that classical logic is not adequate for the representation of time and that some form of non-classical Temporal Logic is needed. In this paper, we shall outline a treatment of time, based on the notion of event, formalized in the Horn clause subset of classical logic augmented with negation as failure. The resulting formalization is executable as a logic program. We use the term "event calculus" to relate it to the well-known "situation calculus" (McCarthy and Hayes 1969). The main difference between the two is conceptual: the situation calculus deals with global states whereas the event calculus deals with local events and time periods. Like the event calculus, the situation calculus can be formalized by means of Horn clauses augmented with negation by failure (Kowalski 1979). The main intended applications investigated in this paper are the updating of databases and narrative understanding. In order to treat both cases uniformly we have taken the view that an update consists of the addition of new knowledge to a knowledge base. The effect of explicit deletion of information in conventional databases is obtained without deletion by adding new knowledge about the end of the period of time for which the information holds.
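The persistence rule at the heart of the event calculus (a fluent holds at time t if some earlier event initiated it and no intervening event terminated it) is easy to sketch. The toy below is a Python rendering of that default-persistence reading, with illustrative names (`initiates`, `terminates`, `holds_at`); the paper's actual formalization is a Horn-clause logic program with negation as failure, not this code.

```python
# Which events initiate or terminate which fluents (illustrative domain).
initiates = {("hire", "employed"), ("switch_on", "light_on")}
terminates = {("fire", "employed"), ("switch_off", "light_on")}

def holds_at(fluent, t, events):
    """A fluent holds at time t if some event at or before t initiated it
    and no later event up to t terminated it (persistence by default)."""
    started = [time for time, ev in events
               if time <= t and (ev, fluent) in initiates]
    if not started:
        return False
    last_start = max(started)
    return not any(last_start < time <= t and (ev, fluent) in terminates
                   for time, ev in events)

narrative = [(1, "hire"), (5, "fire")]
print(holds_at("employed", 3, narrative))  # True
print(holds_at("employed", 6, narrative))  # False
```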
Informed prefetching and caching The underutilization of disk parallelism and file cache buffers by traditional file systems induces I/O stall time that degrades the performance of modern microprocessor-based systems. In this paper, we present aggressive mechanisms that tailor file system resource management to the needs of I/O-intensive applications. In particular, we show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism and to allocate dynamically file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. Our approach estimates the impact of alternative buffer allocations on application execution time and applies a cost-benefit analysis to allocate buffers where they will have the greatest impact. We implemented informed prefetching and caching in DEC's OSF/1 operating system and measured its performance on a 150 MHz Alpha equipped with 15 disks running a range of applications including text search, 3D scientific visualization, relational database queries, speech recognition, and computational chemistry. Informed prefetching reduces the execution time of the first four of these applications by 20% to 87%. Informed caching reduces the execution time of the fifth application by up to 30%.
Tractable Monotone Temporal Planning.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores (score_0–score_13): 1.033348, 0.027653, 0.026649, 0.008333, 0.00492, 0.00294, 0.001245, 0.000243, 0.000057, 0.000009, 0, 0, 0, 0
Multimedia Retrieval via Deep Learning to Rank Many existing learning-to-rank approaches are incapable of effectively modeling the intrinsic interaction relationships between the feature-level and ranking-level components of a ranking model. To address this problem, we propose a novel joint learning-to-rank approach called Deep Latent Structural SVM (DL-SSVM), which jointly learns deep neural networks and latent structural SVM (connected by a set of latent feature grouping variables) to effectively model the interaction relationships at two levels (i.e., feature-level and ranking-level). To make the joint learning problem easier to optimize, we present an effective auxiliary variable-based alternating optimization approach with respect to deep neural network learning and structural latent SVM learning. Experimental results on several challenging datasets have demonstrated the effectiveness of the proposed learning-to-rank approach in real-world information retrieval.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
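A minimal sketch of the method, assuming a homogeneous polynomial kernel $(x \cdot y)^d$ (degree 5 corresponds to the five-pixel-products example in the abstract): diagonalize the centered kernel matrix instead of a covariance matrix in feature space. The function name and toy data are illustrative, not the paper's code.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    """Project training points onto the leading kernel principal components."""
    K = (X @ X.T) ** degree                      # polynomial kernel matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Scale eigenvectors so the feature-space components have unit norm.
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                           # projections of the training points

X = np.random.default_rng(0).standard_normal((40, 16))  # 40 tiny "images"
print(kernel_pca(X).shape)                               # (40, 2)
```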
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
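The factorization idea reduces, at each linearization, to solving a sparse least-squares problem through a QR decomposition of the measurement Jacobian rather than through the normal equations. A dense toy version follows (a real SAM implementation exploits sparsity and column reordering, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 6))   # stacked, whitened measurement Jacobian (toy)
b = rng.standard_normal(12)        # stacked residuals

Q, R = np.linalg.qr(A)             # A = QR; R is the square-root information matrix
dx = np.linalg.solve(R, Q.T @ b)   # back-substitution yields the state update

# Same solution as the normal equations, but numerically better conditioned.
assert np.allclose(dx, np.linalg.solve(A.T @ A, A.T @ b))
print(dx)
```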
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides a way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Discrete Event Calculus with Branching Time.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides a way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Satellite-Based Retrieval of Precipitable Water Vapor Over Land by Using a Neural Network Approach A method based on neural networks is proposed to retrieve integrated precipitable water vapor (IPWV) over land from brightness temperatures measured by the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E). Water vapor values provided by European Centre for Medium-Range Weather Forecasts (ECMWF) were used to train the network. The performance of the network was demonstrated by using a separate data set of AMSR-E observations and the corresponding IPWV values from ECMWF. Our study was optimized over two areas in Northern and Central Italy. Good agreements on the order of 0.24 cm and 0.33 cm rms, respectively, were found between neural network retrievals and ECMWF IPWV data during clear-sky conditions. In the presence of clouds, an rms of the order of 0.38 cm was found for both areas. In addition, results were compared with the IPWV values obtained from in situ instruments, a ground-based radiometer, and a global positioning system (GPS) receiver located in Rome, and a local network of GPS receivers in Como. An rms agreement of 0.34 cm was found between the ground-based radiometer and the neural network retrievals, and of 0.35 cm and 0.40 cm with the GPS located in Rome and Como, respectively.
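A minimal sketch of this kind of retrieval, with synthetic data standing in for the real inputs: an MLP regressor maps multichannel brightness temperatures to IPWV. The channel count, the linear toy relation, and the scikit-learn setup are all assumptions for illustration; the paper trains on AMSR-E observations against ECMWF water vapor.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
Tb = rng.uniform(150.0, 290.0, size=(2000, 6))   # six brightness-temperature channels (K), synthetic
ipwv = 0.01 * Tb[:, 0] - 0.008 * Tb[:, 1] + rng.normal(0.0, 0.1, 2000)  # toy target (cm)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(Tb[:1500], ipwv[:1500])

pred = model.predict(Tb[1500:])
rms = np.sqrt(np.mean((pred - ipwv[1500:]) ** 2))
print(f"rms error: {rms:.3f} cm")
```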
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides a way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Bioinformatics challenges for genome-wide association studies Motivation: The sequencing of the human genome has made it possible to identify an informative set of 1 million single nucleotide polymorphisms (SNPs) across the genome that can be used to carry out genome-wide association studies (GWASs). The availability of massive amounts of GWAS data has necessitated the development of new biostatistical methods for quality control, imputation and analysis issues including multiple testing. This work has been successful and has enabled the discovery of new associations that have been replicated in multiple studies. However, it is now recognized that most SNPs discovered via GWAS have small effects on disease susceptibility and thus may not be suitable for improving health care through genetic testing. One likely explanation for the mixed results of GWAS is that the current biostatistical analysis paradigm is by design agnostic or unbiased in that it ignores all prior knowledge about disease pathobiology. Further, the linear modeling framework that is employed in GWAS often considers only one SNP at a time thus ignoring their genomic and environmental context. There is now a shift away from the biostatistical approach toward a more holistic approach that recognizes the complexity of the genotype–phenotype relationship that is characterized by significant heterogeneity and gene–gene and gene–environment interaction. We argue here that bioinformatics has an important role to play in addressing the complexity of the underlying genetic basis of common human diseases. The goal of this review is to identify and discuss those GWAS challenges that will require computational methods. Contact: [email protected]
EpiGPU: exhaustive pairwise epistasis scans parallelized on consumer level graphics cards. Motivation: Hundreds of genome-wide association studies have been performed over the last decade, but as single nucleotide polymorphism (SNP) chip density has increased so has the computational burden to search for epistasis [for n SNPs the computational time resource is O(n(n-1)/2)]. While the theoretical contribution of epistasis toward phenotypes of medical and economic importance is widely discussed, empirical evidence is conspicuously absent because its analysis is often computationally prohibitive. To facilitate resolution in this field, tools must be made available that can render the search for epistasis universally viable in terms of hardware availability, cost and computational time. Results: By partitioning the 2D search grid across the multicore architecture of a modern consumer graphics processing unit (GPU), we report a 92x increase in the speed of an exhaustive pairwise epistasis scan for a quantitative phenotype, and we expect the speed to increase as graphics cards continue to improve. To achieve a comparable computational improvement without a graphics card would require a large compute-cluster, an option that is often financially non-viable. The implementation presented uses OpenCL, an open-source library designed to run on any commercially available GPU and on any operating system.
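The O(n(n-1)/2) loop that the GPU parallelizes can be written down directly. Below is a naive CPU sketch with random stand-in data; the statistic (the absolute correlation of the genotype product with the phenotype) is only a placeholder, not the paper's test.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_snps, n_ind = 100, 500
geno = rng.integers(0, 3, size=(n_snps, n_ind)).astype(float)  # 0/1/2 allele coding
pheno = rng.standard_normal(n_ind)                             # quantitative phenotype

def interaction_score(i, j):
    """Placeholder statistic: |correlation| of the SNPi*SNPj term with the phenotype."""
    term = geno[i] * geno[j]
    term = term - term.mean()
    denom = term.std() * pheno.std()
    return 0.0 if denom == 0 else abs(np.mean(term * (pheno - pheno.mean())) / denom)

# The exhaustive scan: every unordered pair exactly once.
best = max(combinations(range(n_snps), 2), key=lambda p: interaction_score(*p))
print("top pair:", best, interaction_score(*best))
```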
An efficient algorithm to perform multiple testing in epistasis screening. Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies will require an amount of memory proportional to the square of the number of SNPs. A genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn's disease. In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions with a dataset of 100,000 SNPs typed on 1000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) Processor 2352 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn's disease (CD) data. Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
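The memory argument behind the revised maxT can be sketched in a few lines: instead of materializing a permutations-by-tests matrix, keep only each permutation's maximum null statistic, so the extra memory is O(number of permutations), independent of how many pairwise tests are screened. Everything below (the placeholder statistic, the random stand-in data) is illustrative, not MBMDR code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tests, n_perm, n_ind = 500, 99, 200
trait = rng.integers(0, 2, n_ind)                 # affected / unaffected labels
features = rng.standard_normal((n_tests, n_ind))  # stand-in for the tested effects

def stat(x, labels):
    """Placeholder two-group statistic; MB-MDR uses its own association test."""
    return abs(x[labels == 1].mean() - x[labels == 0].mean())

observed = np.array([stat(f, trait) for f in features])

max_null = np.empty(n_perm)                       # O(n_perm) extra memory only
for p in range(n_perm):
    perm = rng.permutation(trait)                 # one full pass per permutation
    max_null[p] = max(stat(f, perm) for f in features)

adj_p = (1 + (max_null[None, :] >= observed[:, None]).sum(axis=1)) / (n_perm + 1)
print(adj_p.min())
```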
An empirical comparison of several recent epistatic interaction detection methods. Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester. SC does not control type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64 bit Ubuntu system, with Intel (R) Xeon(R) CPU 2.66 GHz, 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and the cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/[email protected].
FPGA-based Acceleration of Detecting Statistical Epistasis in GWAS. Genotype-by-genotype interactions (epistasis) are believed to be a significant source of unexplained genetic variation causing complex chronic diseases but have been ignored in genome-wide association studies (GWAS) due to the computational burden of analysis. In this work we show how to benefit from FPGA technology for highly parallel creation of contingency tables in a systolic chain with a subsequent statistical test. We present the implementation for the FPGA-based hardware platform RIVYERA S6-LX150 containing 128 Xilinx Spartan6-LX150 FPGAs. For performance evaluation we compare against the method iLOCi [9]. iLOCi claims to outperform other available tools in terms of accuracy. However, analysis of a dataset from the Wellcome Trust Case Control Consortium (WTCCC) with about 500,000 SNPs and 5,000 samples still takes about 19 hours on a MacPro workstation with two Intel Xeon quad-core CPUs, while our FPGA-based implementation requires only 4 minutes.
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
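The three ingredients named in the abstract (bootstrapped trees, a random feature subset at each split, voting) are exposed directly by common implementations. A minimal scikit-learn example on a bundled dataset, offered as one hedged illustration rather than the paper's own experiments:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# max_features="sqrt" is the random per-split feature selection the abstract
# describes; n_estimators controls how many trees vote.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
forest.fit(X_tr, y_tr)

print("test accuracy:", forest.score(X_te, y_te))
print("largest variable importance:", forest.feature_importances_.max())
```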
Bioinformatics Research and Applications: 8th International Symposium, ISBRA 2012, Dallas, TX, USA, May 21-23, 2012. Proceedings
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection.These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales.The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
ConGolog, a concurrent programming language based on the situation calculus As an alternative to planning, an approach to high-level agent control based on concurrent program execution is considered. A formal definition in the situation calculus of such a programming language is presented and illustrated with some examples. The language includes facilities for prioritizing the execution of concurrent processes, interrupting the execution when certain conditions become true, and dealing with exogenous actions. The language differs from other procedural formalisms for...
An Introduction to MCMC for Machine Learning The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.
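As a concrete instance of the building blocks such a tutorial reviews, here is a random-walk Metropolis sampler: propose a local move and accept it with probability min(1, p(x')/p(x)); the target density p is then the chain's stationary distribution. The step size and target below are illustrative choices.

```python
import numpy as np

def metropolis(log_p, x0, n_steps=10_000, step=0.5, seed=0):
    """Random-walk Metropolis: symmetric Gaussian proposals, accept/reject."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal                  # accept; otherwise keep the old state
        samples.append(x)
    return np.array(samples)

# Target: a standard normal, via its log-density up to an additive constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0)
print(chain[2000:].mean(), chain[2000:].std())   # roughly 0 and 1 after burn-in
```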
A continuum of disk scheduling algorithms A continuum of disk scheduling algorithms, V(R), having endpoints V(0) = SSTF and V(1) = SCAN, is defined. V(R) maintains a current SCAN direction (in or out) and services next the request with the smallest effective distance. The effective distance of a request that lies in the current direction is its physical distance (in cylinders) from the read/write head. The effective distance of a request in the opposite direction is its physical distance plus R x (total number of cylinders on the disk). By use of simulation methods, it is shown that this definitional continuum also provides a continuum in performance, both with respect to the mean and with respect to the standard deviation of request waiting time. For objective functions that are linear combinations of the two measures, $\mu_W + k\sigma_W$, intermediate points of the continuum are seen to provide performance uniformly superior to both SSTF and SCAN. A method of implementing V(R) and the results of its experimental use in a real system are presented.
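The V(R) rule as defined in the abstract fits in a few lines; the sketch below (function name and toy request queue are illustrative) picks the pending request with the smallest effective distance, where R=0 recovers SSTF and R=1 recovers SCAN.

```python
def next_request(head, direction, pending, R, n_cylinders):
    """Pick the pending cylinder with the smallest V(R) effective distance."""
    def effective(c):
        ahead = (c - head) * direction >= 0       # lies in the current sweep direction?
        dist = abs(c - head)
        return dist if ahead else dist + R * n_cylinders
    return min(pending, key=effective)

pending = [10, 180, 95, 120]
# Head at cylinder 100, sweeping outward, R = 0.2 on a 200-cylinder disk.
print(next_request(head=100, direction=+1, pending=pending, R=0.2, n_cylinders=200))  # 120
```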
Computational Politics: Electoral Systems This paper discusses three computation-related results in the study of electoral systems: 1. Determining the winner in Lewis Carroll's 1876 electoral system is complete for parallel access to NP [22]. 2. For any electoral system that is neutral, consistent, and Condorcet, determining the winner is complete for parallel access to NP [21]. 3. For each census in US history, a simulated annealing algorithm yields provably fairer (in a mathematically rigorous sense) congressional apportionments than any of the classic algorithms--even the algorithm currently used in the United States [24].
Adaptive placement of method executions within a customizable distributed object-based runtime system: design, implementation and performance This paper presents the design and implementation of a mechanism aimed at enhancing the performance of distributed object-based applications. This goal is achieved by means of a new algorithm implementing placement of method executions that adapts to processors' load and to objects' characteristics, the latter allowing one to approximate the cost of methods' remote execution. The behavior of the proposed placement algorithm is examined by providing performance measures obtained from its integration within a customizable distributed object-based runtime system. In particular, the cost of method executions using our algorithm is compared with the cost resulting from the standard placement technique that consists of executing any method on the storing node of its embedding object.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.037004
0.035552
0.034901
0.030923
0.022263
0.000194
0.000001
0
0
0
0
0
0
0
Random duplicate storage strategies for load balancing in multimedia servers An important issue in multimedia servers is disk load balancing. In this paper we use randomization and data redundancy to enable good load balancing. We focus on duplicate storage strategies, i.e., each data block is stored twice. This means that a request for a block can be serviced by two disks. A consequence of such a storage strategy is that we have to decide for each block which disk to use for its retrieval. This results in a so-called retrieval selection problem. We describe a graph model for duplicate storage strategies and derive polynomial time optimization algorithms for the retrieval selection problems of several storage strategies. Our model unifies and generalizes chained declustering and random duplicate assignment strategies. Simulation results and a probabilistic analysis complete this paper.
Random duplicated assignment: an alternative to striping in video servers An approach is presented for storing video data in large disk arrays. Video data is stored by assigning a number of copies of each data block to different, randomly chosen disks, where the number of copies may depend on the popularity of the corresponding video data. The approach offers an interesting alternative to the well-known striping techniques. Its use results in smaller response times and lower disk and RAM costs if many continuous variable-rate data streams have to be sustained simultaneously. It also offers some practical advantages relating to reliability and extendability.Based on this storage approach, three retrieval algorithms are presented that determine, for a given batch of data blocks, from which disk each of the data blocks should be retrieved. The performance of these algorithms is evaluated from an average-case as well as a worst-case perspective.
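A minimal sketch of the retrieval-selection idea under stated assumptions: each block has two randomly assigned replica disks, and a simple greedy policy sends each request to whichever replica disk currently has the shorter queue. The paper's own retrieval algorithms are more sophisticated; all names here are illustrative.

```python
# Sketch: greedy retrieval selection for duplicate storage. Each block lives
# on two randomly chosen disks; each request goes to the shorter queue.
import random

def assign_batch(blocks, replicas, num_disks):
    """replicas maps block -> (disk_a, disk_b); returns per-disk load."""
    load = [0] * num_disks
    for b in blocks:
        a, c = replicas[b]
        chosen = a if load[a] <= load[c] else c
        load[chosen] += 1
    return load

random.seed(1)
num_disks = 8
# Two distinct random replica disks per block (illustrative assignment).
replicas = {b: tuple(random.sample(range(num_disks), 2)) for b in range(100)}
print(assign_batch(list(range(100)), replicas, num_disks))
```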
Simple randomized mergesort on parallel disks We consider the problem of sorting a file of N records on the D-disk model of parallel I/O in which there are two sources of parallelism. Records are transferred to and from disk concurrently in blocks of B contiguous records. In each I/O operation, up to one block can be transferred to or from each of the D disks in parallel. We propose a simple, efficient, randomized mergesort algorithm called SRM that uses a forecast-and-flush approach to overcome the inherent difficulties of simple merging on parallel disks. SRM exhibits a limited use of randomization and also has a useful deterministic version. Generalizing the technique of forecasting, our algorithm is able to read in, at any time, the "right" block from any disk, and using the technique of flushing, our algorithm evicts, without any I/O overhead, just the "right" blocks from memory to make space for new ones to be read in. The disk layout of SRM is such that it enjoys perfect write parallelism, avoiding fundamental inefficiencies of previous mergesort algorithms. Using a novel reduction of various maximum occupancy problems we are able to derive an analytical upper bound on SRM's expected overhead valid for arbitrary inputs. The upper bound derived on expected I/O performance of SRM indicates that SRM is provably better than disk-striped mergesort (DSM) for realistic parameter values D, M, and B. Average-case simulations show further improvement on the analytical upper bound. SRM outperforms DSM even when the number D of parallel disks is small, something that previously proposed optimal external sorting algorithms lacked.
A comparison of high-availability media recovery techniques We compare two high-availability techniques for recovery from media failures in database systems. Both techniques achieve high availability by having two copies of all data and indexes, so that recovery is immediate. “Mirrored declustering” spreads two copies of each relation across two identical sets of disks. “Interleaved declustering” spreads two copies of each relation across one set of disks while keeping both copies of each tuple on separate disks. Both techniques pay the same costs of doubling storage requirements and requiring updates to be applied to both copies.Mirroring offers greater simplicity and universality. Recovery can be implemented at lower levels of the system software (e.g., the disk controller). For architectures that do not share disks globally, it allows global and local cluster indexes to be independent. Also, mirroring does not require data to be declustered (i.e., spread over multiple disks).Interleaved declustering offers significant improvements in recovery time, mean time to loss of both copies of some data, throughput during normal operation, and response time during recovery. For all architectures, interleaved declustering enables data to be spread over twice as many disks for improved load balancing. We show how tuning for interleaved declustering is simplified because it is dependent only on a few parameters that are usually well known for a specific workload and system configuration.
Near-Optimal Parallel Prefetching and Caching Recently there has been a great deal of interest in the operating systems research community in prefetching and caching data from parallel disks, as a technique for enabling serial applications to improve input--output (I/O) performance. In this paper, algorithms are considered for integrated prefetching and caching in a model with a fixed-size cache and any number of backing storage devices (disks). The integration of caching and prefetching with a single disk was previously considered by Cao, Felten, Karlin, and Li. Here, it is shown that the natural extension of their aggressive algorithm to the parallel disk case is suboptimal by a factor near the number of disks in the worst case. The main result is a new algorithm, reverse aggressive, with near-optimal performance for integrated prefetching and caching in the presence of multiple disks.
RAID: high-performance, reliable secondary storage Disk arrays were proposed in the 1980s as a way to use parallelism between multiple disks to improve aggregate I/O performance. Today they appear in the product lines of most major computer manufacturers. This article gives a comprehensive overview of disk arrays and provides a framework in which to organize current and future work. First, the article introduces disk technology and reviews the driving forces that have popularized disk arrays: performance and reliability. It discusses the two architectural techniques used in disk arrays: striping across multiple disks to improve performance and redundancy to improve reliability. Next, the article describes seven disk array architectures, called RAID (Redundant Arrays of Inexpensive Disks) levels 0–6 and compares their performance, cost, and reliability. It goes on to discuss advanced research and implementation topics such as refining the basic RAID levels to improve performance and designing algorithms to maintain data consistency. Last, the article describes six disk array prototypes or products and discusses future opportunities for research, with an annotated bibliography of disk array-related literature.
Multi-Join Optimization for Symmetric Multiprocessors
CMD: A Multidimensional Declustering Method for Parallel Data Systems
Reordering Query Execution in Tertiary Memory Databases In the relational model the order of fetching data does not affect query correctness. This flexibility is exploited in query optimization by statically reordering data accesses. However, once a query is optimized, it is executed in a fixed order in most systems, with the result that data requests are made in a fixed order. Only limited forms of runtime reordering can be provided by low-level device managers. More aggressive reordering strategies are essential in scenarios where the latency of access to data objects varies widely and dynamically, as in tertiary devices. This paper presents such a strategy. Our key innovation is to exploit dynamic reordering to match execution order to the optimal data fetch order, in all parts of the plan-tree. To demonstrate the practicality of our approach and the impact of our optimizations, we report on a prototype implementation based on Postgres. Using our system, typical I/O cost for queries on tertiary memory databases is as much as an order of magnitude smaller than with conventional query processing techniques.
Predicting individual disease risk based on medical history The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future disease risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.
Factored conditional restricted Boltzmann Machines for modeling motion style The Conditional Restricted Boltzmann Machine (CRBM) is a recently proposed model for time series that has a rich, distributed hidden state and permits simple, exact inference. We present a new model, based on the CRBM, that preserves its most important computational properties and includes multiplicative three-way interactions that allow the effective interaction weight between two units to be modulated by the dynamic state of a third unit. We factor the three-way weight tensor implied by the multiplicative model, reducing the number of parameters from O(N³) to O(N²). The result is an efficient, compact model whose effectiveness we demonstrate by modeling human motion. Like the CRBM, our model can capture diverse styles of motion with a single set of parameters, and the three-way interactions greatly improve the model's ability to blend motion styles or to transition smoothly among them.
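The factorization itself is easy to demonstrate: rather than storing a full three-way tensor with N³ entries, store three N-by-F factor matrices and form the modulated pairwise weights on the fly. A numpy sketch with illustrative shapes (the names A, B, C and the sizes are assumptions for demonstration, not the paper's notation):

```python
# Sketch of the three-way factorization: W[i,j,k] = sum_f A[i,f]*B[j,f]*C[k,f].
# Storage drops from N**3 tensor entries to 3*N*F factor entries.
import numpy as np

N, F = 100, 30
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((N, F)) for _ in range(3))

def effective_weights(k_state):
    """Pairwise i-j weights modulated by the state of the third group of units."""
    gate = (C * k_state[:, None]).sum(axis=0)   # per-factor modulation, shape (F,)
    return (A * gate) @ B.T                     # N x N effective weight matrix

W_eff = effective_weights(rng.standard_normal(N))
print(W_eff.shape, "factored params:", 3 * N * F, "vs full tensor:", N ** 3)
```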
Accurate and efficient replaying of file system traces Years of innovation in file systems have been highly successful in improving their performance and functionality, but at the cost of complicating their interaction with the disk. A variety of techniques exist to ensure consistency and integrity of file ...
Global Reinforcement Learning in Neural Networks with Stochastic Synapses We have found a more general formulation of the REINFORCE learning principle which had been proposed by R. J. Williams for the case of artificial neural networks with stochastic cells ("Boltzmann machines"). This formulation has enabled us to apply the principle to global reinforcement learning in networks with deterministic neural cells but stochastic synapses, and to suggest two groups of new learning rules for such networks, including simple local rules. Numerical simulations have shown that at least for several popular benchmark problems one of the new learning rules may provide results on a par with the best known global reinforcement techniques.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.119872
0.076211
0.027725
0.024415
0.001256
0.00017
0.000024
0.00001
0.000002
0
0
0
0
0
Transactions on Shared Data: a coordination model Architecture neutrality, reliability, and support of reactive programs are the primary goals of the coordination and programming language model TSD (Transactions on Shared Data). The basic execution units are transactions that communicate through shared data. Data assigned to variables through unification are immutable; the presence or absence of data is used for synchronization. Transactions report their execution state (executing, succeeded or failed) to variables. Since execution states are visible and can be used for synchronization, all execution control strategies are representable within the proposed model.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16×16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
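A compact numpy sketch of the computation described above: build the kernel matrix, center it in feature space, and project onto its leading eigenvectors. The polynomial kernel, the data, and all sizes are illustrative.

```python
# Sketch of kernel PCA with a polynomial kernel (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))

K = (X @ X.T + 1.0) ** 3                     # degree-3 polynomial kernel
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one   # center the kernel in feature space

vals, vecs = np.linalg.eigh(Kc)              # eigenvalues ascending
vals, vecs = vals[::-1], vecs[:, ::-1]       # leading components first
alphas = vecs[:, :2] / np.sqrt(np.maximum(vals[:2], 1e-12))  # normalize
projections = Kc @ alphas                    # nonlinear principal components
print(projections.shape)                     # (50, 2)
```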
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
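The core numerical step, factorizing the measurement Jacobian into square-root form rather than forming the information matrix, can be shown on a generic linear least-squares problem; A and b below are random stand-ins for the SLAM Jacobian and residual, not real robot data.

```python
# Sketch: least squares via QR factorization of the Jacobian (square-root
# information form), instead of forming the normal-equations matrix A.T @ A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))   # stand-in measurement Jacobian
b = rng.standard_normal(200)         # stand-in measurement residual

Q, R = np.linalg.qr(A)               # R plays the role of the square-root information matrix
x = np.linalg.solve(R, Q.T @ b)      # back-substitution: R x = Q^T b

# Same answer as the normal equations, but better conditioned.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, x_normal))
```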
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learned from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Simple data entanglement layouts with high reliability We study the reliability of open and close entanglements, two simple data distribution layouts for log-structured append-only storage systems. Both techniques use equal numbers of data and parity drives and generate their parity data by computing the exclusive or (XOR) of the most recently appended data with the contents of their last parity drive. While open entanglements maintain an open chain of data and parity drives, closed entanglements include the exclusive or of the contents of their first and last data drives. We evaluate five-year reliabilities of open and closed entanglements, for two different array sizes and drive failure rates. Our results show that open entanglements provide much better five-year reliabilities than mirroring and reduce the probability of a data loss by at least 90 percent over a period of five years. Closed entanglements perform even better and reduce the same probability by at least 98 percent.
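The parity rule is a one-line XOR recurrence, so an open entanglement chain fits in a few lines. The sketch below uses illustrative 4-byte blocks and recovers a lost data drive from its two neighboring parities.

```python
# Sketch of an open entanglement: p[i] = d[i] XOR p[i-1]. A lost data drive
# d[i] is repairable from two adjacent parities: d[i] = p[i] XOR p[i-1].
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"blk0", b"blk1", b"blk2", b"blk3"]   # illustrative drive contents
parity = []
prev = bytes(len(data[0]))                    # all-zero seed before the chain
for d in data:
    prev = xor(d, prev)
    parity.append(prev)

# Recover data[2] as if that drive had failed:
recovered = xor(parity[2], parity[1])
print(recovered == data[2])                   # True
```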
Friendstore: cooperative online backup using trusted nodes Today, it is common for users to own more than tens of gigabytes of digital pictures, videos, experimental traces, etc. Although many users already back up such data on a cheap second disk, it is desirable to also seek off-site redundancies so that important data can survive threats such as natural disasters and operator mistakes. Commercial online backup service is expensive [1, 11]. An alternative solution is to use a peer-to-peer storage system. However, existing cooperative backup systems are plagued by two long-standing problems [3, 4, 9, 19, 27]: enforcing minimal availability from participating nodes, and ensuring that nodes storing others' backup data will not deny restore service in times of need.
Determining Fault Tolerance of XOR-Based Erasure Codes Efficiently We propose a new fault tolerance metric for XOR-based erasure codes: the minimal erasures list (MEL). A minimal erasure is a set of erasures that leads to irrecoverable data loss and in which every erasure is necessary and sufficient for this to be so. The MEL is the enumeration of all minimal erasures. An XOR-based erasure code has an irregular structure that may permit it to tolerate faults at and beyond its Hamming distance. The MEL completely describes the fault tolerance of an XOR-based erasure code at and beyond its Hamming distance; it is therefore a useful metric for comparing the fault tolerance of such codes. We also propose an algorithm that efficiently determines the MEL of an erasure code. This algorithm uses the structure of the erasure code to efficiently determine the MEL. We show that, in practice, the number of minimal erasures for a given code is much less than the total number of sets of erasures that lead to data loss: in our empirical results for one corpus of codes, there were over 80 times fewer minimal erasures. We use the proposed algorithm to identify the most fault tolerant XOR-based erasure code for all possible systematic erasure codes with up to seven data symbols and up to seven parity symbols.
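For very small codes the MEL can be enumerated by brute force: an erasure set is irrecoverable iff the surviving symbols' generator rows no longer span the data space over GF(2), and a minimal erasure is an irrecoverable set every proper subset of which is recoverable. A sketch with generator rows encoded as integer bitmasks; the (k=3, m=2) code is illustrative, and the paper's actual algorithm is far more efficient than this enumeration.

```python
# Brute-force sketch of the minimal erasures list (MEL) for a tiny XOR code.
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    rows, rank = list(rows), 0
    while rows:
        pivot = max(rows)
        if pivot == 0:
            break
        rank += 1
        top = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> top) & 1 else r
                for r in rows if r != pivot]
    return rank

k = 3
symbols = [0b001, 0b010, 0b100,   # data symbols d0, d1, d2 (identity rows)
           0b011, 0b110]          # parity symbols p0 = d0^d1, p1 = d1^d2

def recoverable(erased):
    survivors = [r for i, r in enumerate(symbols) if i not in erased]
    return gf2_rank(survivors) == k

mel = [e for size in range(1, len(symbols) + 1)
       for e in combinations(range(len(symbols)), size)
       if not recoverable(set(e))
       and all(recoverable(set(e) - {i}) for i in e)]
print(mel)   # each tuple lists the symbol indices of one minimal erasure
```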
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified.
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
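The paper's feature-sign search and Lagrange-dual solvers are more involved than can be shown here; as a generic stand-in, the sketch below solves the L1-regularized least-squares subproblem (inferring codes s for a fixed basis B) with plain ISTA. All names and sizes are illustrative.

```python
# ISTA sketch for the L1-regularized least-squares step of sparse coding:
# minimize ||x - B @ s||^2 + lam * ||s||_1 for a fixed basis B. This is a
# generic solver, not the paper's feature-sign search algorithm.
import numpy as np

def ista(B, x, lam, iters=500):
    L = np.linalg.norm(B, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(B.shape[1])
    for _ in range(iters):
        g = B.T @ (B @ s - x)              # gradient of the smooth part
        z = s - g / L                      # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # shrink
    return s

rng = np.random.default_rng(0)
B = rng.standard_normal((64, 128))         # overcomplete basis
s_true = np.zeros(128)
s_true[rng.choice(128, 5, replace=False)] = 1.0
x = B @ s_true + 0.01 * rng.standard_normal(64)
s_hat = ista(B, x, lam=0.5)
print(int((np.abs(s_hat) > 1e-3).sum()), "active coefficients")
```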
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[\log^k n]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all k.
Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
ARIMA time series modeling and forecasting for adaptive I/O prefetching Bursty application I/O patterns, together with transfer limited storage devices, combine to create a major I/O bottleneck on parallel systems. This paper explores the use of time series models to forecast application I/O request times, then prefetching I/O requests during computation intervals to hide I/O latency. Experimental results with I/O intensive scientific codes show performance improvements compared to standard UNIX prefetching strategies.
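A minimal forecasting sketch in this spirit, assuming statsmodels is available; the synthetic inter-request-time series and the ARIMA order (2, 0, 1) are illustrative choices, not the paper's configuration.

```python
# Sketch: fit an ARIMA model to a synthetic I/O inter-request-time series
# and forecast the next few requests. Requires statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Bursty-looking synthetic series: baseline noise plus periodic bursts.
times = 5.0 + rng.standard_normal(200) + 3.0 * (np.arange(200) % 20 < 3)

res = ARIMA(times, order=(2, 0, 1)).fit()
forecast = res.forecast(steps=5)   # predicted upcoming inter-request times
print(forecast)
# A prefetcher would issue reads for blocks whose predicted access falls
# within the next computation interval.
```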
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.2
0.066667
0.033333
0.000219
0
0
0
0
0
0
0
0
0
0
Tolerating hard faults in microprocessor array structures In this paper, we present a hardware technique, called self-repairing array structures (SRAS), for masking hard faults in microprocessor array structures, such as the reorder buffer and branch history table. SRAS masks errors that could otherwise lead to slow system recoveries. To detect row errors, every write to a row is mirrored to a dedicated "check row". We then read out both the written row and check row and compare their results. To correct errors, SRAS maps out faulty array rows with a level of indirection.
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic programs with classical negation
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16×16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.111111
0.000365
0
0
0
0
0
0
0
0
0
0
0
0
3D object understanding with 3D Convolutional Neural Networks Feature engineering plays an important role in object understanding. Expressive discriminative features can guarantee the success of object understanding tasks. With its remarkable ability for data abstraction, a deep hierarchical architecture has the potential to represent objects. For 3D objects with multiple views, existing deep learning methods cannot handle all the views with high quality. In this paper, we propose a 3D convolutional neural network, a deep hierarchical model with a structure similar to that of a convolutional neural network. We employ stochastic gradient descent (SGD) to pretrain the convolutional layer, and then a back-propagation method is proposed to fine-tune the whole network. Finally, we use the result of the two phases for 3D object retrieval. The proposed method is shown to outperform the state-of-the-art approaches in experiments conducted on publicly available 3D object datasets.
Deep Fusion of Multiple Semantic Cues for Complex Event Recognition. We present a deep learning strategy to fuse multiple semantic cues for complex event recognition. In particular, we tackle the recognition task by answering how to jointly analyze human actions (who is doing what), objects (what), and scenes (where). First, each type of semantic features (e.g., human action trajectories) is fed into a corresponding multi-layer feature abstraction pathway, followed...
Local deep feature learning framework for 3D shape. For 3D shape analysis, an effective and efficient feature is the key to popularizing its applications in the 3D domain. In this paper, we present a novel framework to learn and extract a local deep feature (LDF), which encodes multiple low-level descriptors and provides a highly discriminative representation of a local region on a 3D shape. The framework consists of four main steps. First, several basic descriptors are calculated and encapsulated to generate geometric bag-of-words in order to make full use of the various basic descriptors' properties. Then the 3D mesh is down-sampled to hundreds of feature points to accelerate model learning. Next, in order to preserve the local geometric information and establish the relationships among points in a local area, the geometric bag-of-words are encoded into local geodesic-aware bag-of-features (LGA-BoF). However, the resulting feature is redundant, which leads to low discriminability and efficiency. Therefore, in the final step, we use deep belief networks (DBNs) to learn a model, and use it to generate the LDF, which is highly discriminative and effective for 3D shape applications. 3D shape correspondence and symmetry detection experiments compared with related feature descriptors are carried out on several datasets, and shape recognition is also conducted, validating the proposed local deep feature learning framework.
DeepID-Net: multi-stage and deformable deep convolutional neural networks for object detection. In this paper, we propose multi-stage and deformable deep convolutional neural networks for object detection. This new deep learning object detection pipeline has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. With the proposed multi-stage training strategy, multiple classifiers are jointly optimized to process samples at different difficulty levels. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, and adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach ranked #2 in ILSVRC 2014. It improves the mean averaged precision obtained by RCNN, which is the state-of-the-art of object detection, from 31% to 45%. Detailed component-wise analysis is also provided through extensive experimental evaluation.
How to Construct Deep Recurrent Neural Networks. In this paper, we explore different ways to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper: (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs.
Sparse deep belief net model for visual area V2 Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or "deep," structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear ("contour") features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex "corner" features matches well with the results from Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features.
Evaluating collaborative filtering recommender systems Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated.
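As a small concrete example, the sketch below computes three common prediction-accuracy metrics on illustrative rating data; metrics from the same equivalence class tend to move together, which is the kind of correlation the article measures.

```python
# Sketch: three standard prediction-accuracy metrics for collaborative
# filtering evaluation, on illustrative ratings.
import numpy as np

actual = np.array([4.0, 3.0, 5.0, 2.0, 4.0])
predicted = np.array([3.5, 3.0, 4.0, 2.5, 4.5])

mae = np.mean(np.abs(actual - predicted))            # mean absolute error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))   # root mean squared error
corr = np.corrcoef(actual, predicted)[0, 1]          # Pearson correlation

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  r={corr:.3f}")
```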
Learning Multilevel Distributed Representations for High-Dimensional Sequences We describe a new family of non-linear sequence models that are substantially more powerful than hidden Markov models or linear dynamical systems. Our models have simple approximate inference and learning procedures that work well in practice. Multilevel representations of sequential data can be learned one hidden layer at a time, and adding extra hidden layers improves the resulting generative models. The models can be trained with very high-dimensional, very non-linear data such as raw pixel sequences. Their performance is demonstrated using synthetic video sequences of two balls bouncing in a box.
Deep learning from temporal coherence in video This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.
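A minimal sketch of how temporal coherence can act as a supervisory signal: representations of two successive frames are pulled together, while a temporally distant frame is pushed at least a margin away. The contrastive form and the margin value are illustrative assumptions, not the paper's exact loss.

    import numpy as np

    def coherence_loss(z_t, z_t1, z_far, margin=1.0):
        """Temporal-coherence objective (illustrative sketch).

        Successive frames (z_t, z_t1) should map to nearby codes, while a
        frame from a distant time step (z_far) is pushed at least `margin`
        away -- the temporal order of unlabeled video acts as supervision.
        """
        pull = np.sum((z_t - z_t1) ** 2)
        push = max(0.0, margin - np.linalg.norm(z_t - z_far)) ** 2
        return pull + push

    rng = np.random.default_rng(1)
    z_a, z_b, z_c = rng.normal(size=(3, 16))
    print(coherence_loss(z_a, z_b, z_c))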
Computing Circumscription Revisited: A Reduction Algorithm In recent years, a great deal of attention has been devoted to logics of common-sense reasoning. Among the candidates proposed, circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic reasoning, but difficult to apply in practice. The major reason for this is the second-order nature of circumscription axioms and the difficulty in finding proper substitutions of predicate expressions for predicate variables. One solution to this problem is to compile, where possible, second-order formulas into equivalent first-order formulas. Although some progress has been made using this approach, the results are not as strong as one might desire and they are isolated in nature. In this article, we provide a general method that can be used in an algorithmic manner to reduce certain circumscription axioms to first-order formulas. The algorithm takes as input an arbitrary second-order formula and either returns as output an equivalent first-order formula, or terminates with failure. The class of second-order formulas, and analogously the class of circumscriptive theories that can be reduced, provably subsumes those covered by existing results. We demonstrate the generality of the algorithm using circumscriptive theories with mixed quantifiers (some involving Skolemization), variable constants, nonseparated formulas, and formulas with n-ary predicate variables. In addition, we analyze the strength of the algorithm, compare it with existing approaches, and provide formal subsumption results.
Zoned-RAID The RAID (Redundant Array of Inexpensive Disks) system has been widely used in practical storage applications for better performance, cost effectiveness, and reliability. This study proposes a novel variant of RAID named Zoned-RAID (Z-RAID). Z-RAID improves the performance of traditional RAID by utilizing the zoning property of modern disks, which provides multiple zones with different data transfer rates within a disk. Z-RAID levels 1, 5, and 6 are introduced to enhance the effective data transfer rate of RAID levels 1, 5, and 6, respectively, by constraining the placement of data blocks in multizone disks. We apply Z-RAID to a practical and popular application, the streaming media server, which requires a high data transfer rate as well as high reliability. The analytical and experimental results demonstrate the superiority of Z-RAID over conventional RAID. Z-RAID provides a higher effective data transfer rate in normal mode with no disadvantage. In the presence of a disk failure, Z-RAID still performs as well as RAID.
Logical Preference Representation and Combinatorial Vote We introduce the notion of combinatorial vote, where a group of agents (or voters) is supposed to express preferences and come to a common decision concerning a set of non-independent variables to assign. We study two key issues pertaining to combinatorial vote, namely preference representation and the automated choice of an optimal decision. For each of these issues, we briefly review the state of the art, we try to define the main problems to be solved and identify their computational complexity.
Relating equivalence and reducibility to sparse sets For various polynomial-time reducibilities r, the authors ask whether being r-reducible to a sparse set is a broader notion than being r-equivalent to a sparse set. Although distinguishing equivalence and reducibility to sparse sets, for many-one or 1-truth-table reductions, would imply that P≠NP, the authors show that for k-truth-table reductions, k⩾2, equivalence and reducibility to sparse sets provably differ. Though R. Gavalda and D. Watanabe have shown that, for any polynomial-time computable unbounded function f(·), some sets f(n)-truth-table reducible to sparse sets are not even Turing equivalent to sparse sets, the authors show that extending their result to the 2-truth-table case would provide a proof that P≠NP. Additionally, the authors study the relative power of different notions of reducibility and show that disjunctive and conjunctive truth-table reductions to sparse sets are surprisingly powerful, refuting a conjecture of K. Ko (1989)
Unsupervised (Parameter) Learning for MRFs on Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach - a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double loop algorithm and PCD learning experimentally and show that the former converges faster and more stably than the latter.
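For reference, the following numpy sketch shows one step of the stochastic gradient baseline the abstract mentions, Persistent Contrastive Divergence for a binary RBM: positive statistics come from the data, negative statistics from a Gibbs chain that persists across updates. Biases are omitted and all names are illustrative.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def pcd_step(v_data, v_chain, W, lr=0.01, rng=np.random.default_rng(2)):
        """One Persistent Contrastive Divergence update for a binary RBM.

        The positive statistics come from the data; the negative statistics
        come from a persistent Gibbs chain that survives across updates.
        (Sketch: biases omitted for brevity.)
        """
        # positive phase: hidden probabilities given the data
        h_data = sigmoid(v_data @ W)
        # negative phase: advance the persistent chain one Gibbs sweep
        h_chain = (sigmoid(v_chain @ W) > rng.random(W.shape[1])).astype(float)
        v_chain = (sigmoid(h_chain @ W.T) > rng.random(W.shape[0])).astype(float)
        grad = np.outer(v_data, h_data) - np.outer(v_chain, sigmoid(v_chain @ W))
        return W + lr * grad, v_chain

    W = np.zeros((6, 4))
    v = np.array([1., 0., 1., 1., 0., 0.])
    chain = np.zeros(6)
    W, chain = pcd_step(v, chain, W)
    print(W.shape, chain.shape)  # (6, 4) (6,)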
1.222
0.222
0.074
0.0444
0.020213
0.002198
0.000667
0.000125
0.000027
0
0
0
0
0
Similarity query processing using disk arrays Similarity queries are fundamental operations that are used extensively in many modern applications, whereas disk arrays are powerful storage media of increasing importance. The basic trade-off in similarity query processing in such a system is that increased parallelism leads to higher resource consumption and low throughput, whereas low parallelism leads to higher response times. Here, we propose a technique which is based on a careful investigation of the currently available data in order to exploit parallelism up to a point, retaining low response times during query processing. The underlying access method is a variation of the R*-tree, which is distributed among the components of a disk array, and the system is simulated using event-driven simulation. The performance experiments conducted demonstrate that the proposed approach outperforms, by factors, a previous branch-and-bound algorithm and a greedy algorithm which maximizes parallelism as much as possible. Moreover, the comparison of the proposed algorithm to a hypothetical (non-existing) optimal one (with respect to the number of disk accesses) shows that the former is on average two times slower than the latter.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of a quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
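A minimal numpy sketch of the kernel trick the abstract describes, here with an RBF kernel: the principal components in feature space are obtained from the eigendecomposition of the centered kernel matrix, without ever forming the nonlinear feature map explicitly. The kernel choice and parameters are illustrative.

    import numpy as np

    def kernel_pca(X, n_components=2, gamma=1.0):
        """Kernel PCA with an RBF kernel (minimal sketch)."""
        sq = np.sum(X**2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        n = K.shape[0]
        one = np.ones((n, n)) / n
        Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
        vals, vecs = np.linalg.eigh(Kc)
        idx = np.argsort(vals)[::-1][:n_components]   # top eigenpairs
        alphas = vecs[:, idx] / np.sqrt(vals[idx])    # normalized coefficients
        return Kc @ alphas                            # projections of the data

    X = np.random.default_rng(3).normal(size=(50, 5))
    print(kernel_pca(X).shape)  # (50, 2)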
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines by 68% relative to a single disk system. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
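A toy illustration of the square-root idea: the smoothing problem is a sparse overdetermined linear system J x = b, and it can be solved through a QR factorization of J itself rather than through the normal equations J'J x = J'b. The 1-D trajectory, constraints, and noise-free numbers below are made up for the example.

    import numpy as np

    # Toy 1-D "SLAM as smoothing": a prior on x0, two odometry constraints,
    # and one absolute position fix give an overdetermined system J x = b.
    J = np.array([[ 1.,  0., 0.],   # prior:    x0 = 0
                  [-1.,  1., 0.],   # odometry: x1 - x0 = 1.1
                  [ 0., -1., 1.],   # odometry: x2 - x1 = 0.9
                  [ 0.,  0., 1.]])  # fix:      x2 = 2.05
    b = np.array([0., 1.1, 0.9, 2.05])

    # Square-root solution: factor J = QR and back-substitute on R,
    # never forming the information matrix J'J explicitly.
    Q, R = np.linalg.qr(J)
    x = np.linalg.solve(R, Q.T @ b)
    print(x.round(3))  # the full smoothed trajectory [x0, x1, x2]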
An A-Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
DRPM: dynamic speed control for power management in server class disks A large portion of the power budget in server environments goes into the I/O subsystem - the disk array in particular. Traditional approaches to disk power management involve completely stopping the disk rotation, which can take a considerable amount of time, making them less useful in cases where idle times between disk requests may not be long enough to outweigh the overheads. This paper presents a new approach called DRPM to modulate disk speed (RPM) dynamically, and gives a practical implementation to exploit this mechanism. Extensive simulations with different workload and hardware parameters show that DRPM can provide significant energy savings without compromising much on performance. This paper also discusses practical issues when implementing DRPM on server disks.
SODA: sensitivity based optimization of disk architecture Storage plays a pivotal role in the performance of many applications. Optimizing disk architectures is a design-time as well as a run-time issue and requires balancing between performance, power and capacity. The design space is large and there are many "knobs" that can be used to optimize disk drive behavior. Here we present a sensitivity-based optimization for disk architectures (SODA) which leverages results from digital circuit design. Using detailed models of the electro-mechanical behavior of disk drives and a suite of realistic workloads, we show how SODA can aid in design and runtime optimization.
Intra-disk Parallelism: An Idea Whose Time Has Come Server storage systems use a large number of disks to achieve high performance, thereby consuming a significant amount of power. In this paper, we propose to significantly reduce the power consumed by such storage systems via intra-disk parallelism, wherein disk drives can exploit parallelism in the I/O request stream. Intra-disk parallelism can facilitate replacing a large disk array with a smaller one, using the minimum number of disk drives needed to satisfy the capacity requirements. We show that the design space of intra-disk parallelism is large and present a taxonomy to formulate specific implementations within this space. Using a set of commercial workloads, we perform a limit study to identify the key performance bottlenecks that arise when we replace a storage array that is tuned to provide high performance with a single high-capacity disk drive. We show that it is possible to match, and even surpass, the performance of a storage array for these workloads by using a single disk drive of sufficient capacity that exploits intra-disk parallelism, while significantly reducing the power consumed by the storage system. We evaluate the performance and power consumption of disk arrays composed of intra-disk parallel drives, and discuss engineering and cost issues related to the implementation and deployment of such disk drives.
EERAID: energy efficient redundant and inexpensive disk array Recent research works have been presented on conserving energy for multi-disk systems either at a single disk drive level or at a storage system level and thereby having certain limitations. This paper studies several new redundancy-based, power-aware, I/O request scheduling and cache management policies at the RAID controller level to build energy-efficient RAID systems, by exploiting the redundant information and destage issues of the array for two popular RAID levels, RAID 1 and RAID 5. For RAID 1, we develop a Windowed Round Robin (WRR) request scheduling policy; for RAID 5, we introduce a N-chance Power Aware cache replacement algorithm (NPA) for writes and a Power-Directed, Transformable (PDT) request scheduling policy for reads. Trace-driven simulation proves EERAID saves much more energy than legacy RAIDs and existing solutions.
Reducing Energy Consumption of Disk Storage Using Power-Aware Cache Management Reducing energy consumption is an important issue for data centers. Among the various components of a data center, storage is one of the biggest consumers of energy. Previous studies have shown that the average idle period for a server disk in a data center is very small compared to the time taken to spin down and spin up. This significantly limits the effectiveness of disk power management schemes. This paper proposes several power-aware storage cache management algorithms that provide more opportunities for the underlying disk power management schemes to save energy. More specifically, we present an off-line power-aware greedy algorithm that is more energy-efficient than Belady's off-line algorithm (which minimizes cache misses only). We also propose an online power-aware cache replacement algorithm. Our trace-driven simulations show that, compared to LRU, our algorithm saves 16% more disk energy and provides 50% better average response time for OLTP I/O workloads. We have also investigated the effects of four storage cache write policies on disk energy consumption.
Power-aware storage cache management Reducing energy consumption is an important issue for data centers. Among the various components of a data center, storage is one of the biggest energy consumers. Previous studies have shown that the average idle period for a server disk in a data center is very small compared to the time taken to spin down and spin up. This significantly limits the effectiveness of disk power management schemes. This article proposes several power-aware storage cache management algorithms that provide more opportunities for the underlying disk power management schemes to save energy. More specifically, we present an offline energy-optimal cache replacement algorithm using dynamic programming, which minimizes the disk energy consumption. We also present an offline power-aware greedy algorithm that is more energy-efficient than Belady's offline algorithm (which minimizes cache misses only). We also propose two online power-aware algorithms, PA-LRU and PB-LRU. Simulation results with both a real system and synthetic workloads show that, compared to LRU, our online algorithms can save up to 22 percent more disk energy and provide up to 64 percent better average response time. We have also investigated the effects of four storage cache write policies on disk energy consumption.
AFRAID: a frequently redundant array of independent disks Disk arrays are commonly designed to ensure that stored data will always be able to withstand a disk failure, but meeting this goal comes at a significant cost in performance. We show that this is unnecessary. By trading away a fraction of the enormous reliability provided by disk arrays, it is possible to achieve performance that is almost as good as a non-parity-protected set of disks. In particular, our AFRAID design eliminates the small-update penalty that plagues traditional RAID 5 disk arrays. It does this by applying the data update immediately, but delaying the parity update to the next quiet period between bursts of client activity. That is, AFRAID makes sure that the array is frequently redundant, even if it isn't always so. By regulating the parity update policy, AFRAID allows a smooth trade-off between performance and availability. Under real-life workloads, the AFRAID design can provide close to the full performance of an array of unprotected disks, and data availability comparable to a traditional RAID 5. Our results show that AFRAID offers 42% better performance for only 10% less availability, 97% better for 23% less, and as much as a factor of 4.1 times better performance for giving up less than half RAID 5's availability. We explore here the detailed availability and performance implications of the AFRAID approach.
AMP: adaptive multi-stream prefetching in a shared cache Prefetching is a widely used technique in modern data storage systems. We study the most widely used class of prefetching algorithms known as sequential prefetching. There are two problems that plague the state-of-the-art sequential prefetching algorithms: (i) cache pollution, which occurs when prefetched data replaces more useful prefetched or demand-paged data, and (ii) prefetch wastage, which happens when prefetched data is evicted from the cache before it can be used. A sequential prefetching algorithm can have a fixed or adaptive degree of prefetch and can be either synchronous (when it can prefetch only on a miss), or asynchronous (when it can also prefetch on a hit). To capture these distinctions we define four classes of prefetching algorithms: Fixed Synchronous (FS), Fixed Asynchronous (FA), Adaptive Synchronous (AS), and Adaptive Asynchronous (AA). We find that the relatively unexplored class of AA algorithms is in fact the most promising for sequential prefetching. We provide a first formal analysis of the criteria necessary for optimal throughput when using an AA algorithm in a cache shared by multiple steady sequential streams. We then provide a simple implementation called AMP, which adapts accordingly leading to near optimal performance for any kind of sequential workload and cache size. Our experimental set-up consisted of an IBM xSeries 345 dual processor server running Linux using five SCSI disks. We observe that AMP convincingly outperforms all the contending members of the FA, FS, and AS classes for any number of streams, and over all cache sizes. As anecdotal evidence, in an experiment with 100 concurrent sequential streams and varying cache sizes, AMP beats the FA, FS, and AS algorithms by 29-172%, 12-24%, and 21-210% respectively while outperforming OBL by a factor of 8. Even for complex workloads like SPC1-Read, AMP is consistently the best performing algorithm. For the SPC2 Video-on-Demand workload, AMP can sustain at least 25% more streams than the next best algorithm. Finally, for a workload consisting of short sequences, where optimality is more elusive, AMP is able to outperform all the other contenders in overall performance.
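The following toy class illustrates the adapt-on-feedback idea behind adaptive-asynchronous prefetching (it is a sketch of the general principle, not AMP itself): the per-stream prefetch degree grows when a stream exhausts its prefetched pages, and shrinks when prefetched pages are evicted unused.

    class AdaptivePrefetcher:
        """Toy adaptive prefetcher (illustrative sketch, not AMP itself).

        Per stream it keeps a prefetch degree p: grow p when the stream
        drains its prefetched pages before they are evicted (risk of a
        miss), shrink p when prefetched pages are evicted unused (wastage).
        """
        def __init__(self, p=2, p_max=64):
            self.p, self.p_max = p, p_max

        def on_prefetched_page_hit(self, pages_left):
            if pages_left == 0:          # consumed everything: prefetch more
                self.p = min(self.p + 1, self.p_max)
            return self.p                # pages to fetch on this trigger

        def on_prefetched_page_evicted_unused(self):
            self.p = max(self.p - 1, 1)  # cache pollution: back off

    pf = AdaptivePrefetcher()
    print(pf.on_prefetched_page_hit(pages_left=0))  # degree grows to 3
    pf.on_prefetched_page_evicted_unused()
    print(pf.p)                                     # back to 2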
An Analysis of Prepaging Prepaging is advocated as a technique to reduce the excessive page traffic due to the changes in the phases of execution of a program. Common prepaging techniques are surveyed. It is argued that the phase transition behavior cannot be adequately predicted based either on spatial contiguity or on observation of past behavior. Prepaging advice generated by the programmer or the compiler is presented as a technique for predicting the phase transition behavior. To simplify the generation of the prepaging advice, the processes of extracting the phase transition behavior and scheduling the page transfers are decoupled. This, in turn, dictates the need for controlled prepaging, which is discussed next. A performance comparison of a sequential prepaging scheme and a user-aided prepaging scheme is carried out. The relative space-time product is presented as a measure of the effectiveness of a prepaging scheme. The effect of prepaging on system throughput is studied using a cyclic queuing model.
Minerva: An automated resource provisioning tool for large-scale storage systems Enterprise-scale storage systems, which can contain hundreds of host computers and storage devices and up to tens of thousands of disks and logical volumes, are difficult to design. The volume of choices that need to be made is massive, and many choices have unforeseen interactions. Storage system design is tedious and complicated to do by hand, usually leading to solutions that are grossly over-provisioned, substantially under-performing or, in the worst case, both. To solve the configuration nightmare, we present Minerva: a suite of tools for designing storage systems automatically. Minerva uses declarative specifications of application requirements and device capabilities; constraint-based formulations of the various sub-problems; and optimization techniques to explore the search space of possible solutions. This paper also explores and evaluates the design decisions that went into Minerva, using specialized micro- and macro-benchmarks. We show that Minerva can successfully handle a workload with substantial complexity (a decision-support database benchmark). Minerva created a 16-disk design in only a few minutes that achieved the same performance as a 30-disk system manually designed by human experts. Of equal importance, Minerva was able to predict the resulting system's performance before it was built.
A file is not a file: understanding the I/O behavior of Apple desktop applications We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.
OBDD-Based Universal Planning for Multiple Synchronized Agents in Non-Deterministic Domains Recently, model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. OBDDs can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present UMOP, a new universal OBDD-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to build an efficient OBDD-based representation of an NADL description. The UMOP planning system uses NADL and different OBDD-based universal planning algorithms. It includes the previously developed strong and strong cyclic planning algorithms. In addition, we introduce our new optimistic planning algorithm, which relaxes optimality guarantees and generates plausible universal plans in some domains where neither a strong nor a strong cyclic solution exists. We present empirical results applying UMOP to domains ranging from deterministic and single-agent with no environment actions to non-deterministic and multi-agent with complex environment actions. UMOP is shown to be a rich and efficient planning system.
Transformation Invariant Autoassociation with Application to Handwritten Character Recognition When training neural networks by the classical backpropagation algorithm the whole problem to learn must be expressed by a set of inputs and desired outputs. However, we often have high-level knowledge about the learning problem. In optical character recognition (OCR), for instance, we know that the classification should be invariant under a set of transformations like rotation or translation. We propose a new modular classification system based on several autoassociative multilayer perceptrons which allows the efficient incorporation of such knowledge. Results are reported on the NIST database of upper case handwritten letters and compared to other approaches to the invariance problem.
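A simplified sketch of classification by autoassociation: one autoassociator is fitted per class, and a sample is assigned to the class whose model reconstructs it with the smallest error. Linear autoassociators (PCA subspaces) stand in here for the paper's multilayer perceptrons, and the data is synthetic.

    import numpy as np

    def fit_linear_autoassociator(X, k=5):
        """Fit a linear autoassociator (PCA subspace) for one class."""
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k]                      # class mean + top-k basis

    def reconstruction_error(x, model):
        mu, V = model
        z = (x - mu) @ V.T                     # encode
        return np.sum((x - mu - z @ V) ** 2)   # decode and compare

    def classify(x, models):
        """Assign x to the class whose autoassociator reconstructs it best."""
        errs = [reconstruction_error(x, m) for m in models]
        return int(np.argmin(errs))

    rng = np.random.default_rng(4)
    class0 = rng.normal(0.0, 1.0, size=(100, 20))
    class1 = rng.normal(3.0, 1.0, size=(100, 20))
    models = [fit_linear_autoassociator(class0), fit_linear_autoassociator(class1)]
    print(classify(class1[0], models))  # expected: 1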
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.007873
0.014741
0.014741
0.008169
0.005524
0.002674
0.000973
0.000245
0.000055
0.000019
0.000001
0
0
0
GPU-BLAST: using graphics processors to accelerate protein sequence alignment. Motivation: The Basic Local Alignment Search Tool (BLAST) is one of the most widely used bioinformatics tools. The widespread impact of BLAST is reflected in over 53 000 citations that this software has received in the past two decades, and in the use of the word 'blast' as a verb referring to biological sequence comparison. Any improvement in the execution speed of BLAST would be of great importance in the practice of bioinformatics, and facilitate coping with the ever increasing sizes of biomolecular databases. Results: Using a general-purpose graphics processing unit (GPU), we have developed GPU-BLAST, an accelerated version of the popular NCBI-BLAST. The implementation is based on the source code of NCBI-BLAST, thus maintaining the same input and output interface while producing identical results. In comparison to the sequential NCBI-BLAST, the speedups achieved by GPU-BLAST range mostly between 3 and 4 times.
Towards systolic hardware acceleration for local complexity analysis of massive genomic data Modern biological research has greatly benefited from genomics. Such research however requires extensive computational power, traditionally employed on large-scale cluster machines as well as multi-core systems. Recent research in reconfigurable architectures however suggests that FPGA-based acceleration of genomic algorithms greatly improves the performance and energy efficiency when compared to multi-core systems and clusters. In this work, we present an initial attempt for massive systolic acceleration of the popular CAST algorithm employed by biologists for complexity analysis of genomic data. CAST is used for detecting (and subsequently masking) low-complexity regions (LCRs) in protein sequences. We designed and implemented a high-performance hardware-accelerated version of CAST for which we built an FPGA prototype, and benchmarked its performance against serial and multithreaded versions of the CAST algorithm in software. The proposed architecture achieves remarkable speedup compared to both serial and multithreaded CAST implementations ranging from approx. 100x-9500x, depending on the dataset features, such as low-complexity content and sequence length distribution. Such performance may enable complex analyses of voluminous sequence datasets, and has the potential to interoperate with other hardware architectures for protein sequence analysis.
A Review of Hardware Acceleration for Computational Genomics Hardware accelerators are becoming increasingly commonplace in delivering high performance computing solutions at a fraction of the cost of conventional supercomputers and standalone CPU clusters, despite the additional programming effort required to utilize them. This paper provides a survey on the use of hardware accelerators such as FPGAs and GPUs in the area of biological sequence analysis, particularly in the domain of computational genomics. We also survey research on hardware acceleration in response to emerging trends in high-throughput sequencing, and applications enabled by it. We conclude the survey with remarks on how these trends influence the use of hardware acceleration in bioinformatics, and the role of recently developed or soon to be released accelerator technologies.
cuBLASTP: Fine-Grained Parallelization of Protein Sequence Search on CPU+GPU. BLAST, short for Basic Local Alignment Search Tool, is a ubiquitous tool used in the life sciences for pairwise sequence search. However, with the advent of next-generation sequencing (NGS), whether at the outset or downstream from NGS, the exponential growth of sequence databases is outstripping our ability to analyze the data. While recent studies have utilized the graphics processing unit (GPU)...
G-BLASTN: accelerating nucleotide alignment by graphics processors. Motivation: Since 1990, the basic local alignment search tool (BLAST) has become one of the most popular and fundamental bioinformatics tools for sequence similarity searching, receiving extensive attention from the research community. The two pioneering papers on BLAST have received over 96 000 citations. Given the huge population of BLAST users and the increasing size of sequence databases, an urgent topic of study is how to improve the speed. Recently, graphics processing units (GPUs) have been widely used as low-cost, high-performance computing platforms. The existing GPU-BLAST is a promising software tool that uses a GPU to accelerate protein sequence alignment. Unfortunately, there is still no GPU-accelerated software tool for BLAST-based nucleotide sequence alignment. Results: We developed G-BLASTN, a GPU-accelerated nucleotide alignment tool based on the widely used NCBI-BLAST. G-BLASTN can produce exactly the same results as NCBI-BLAST, and it has very similar user commands. Compared with the sequential NCBI-BLAST, G-BLASTN can achieve an overall speedup of 14.80X under 'megablast' mode. More impressively, it achieves an overall speedup of 7.15X over the multithreaded NCBI-BLAST running on 4 CPU cores. When running under 'blastn' mode, the overall speedups are 4.32X (against 1-core) and 1.56X (against 4-core). G-BLASTN also supports a pipeline mode that further improves the overall performance by up to 44% when handling a batch of queries as a whole. Currently G-BLASTN is best optimized for databases with long sequences. We plan to optimize its performance on short database sequences in our future work.
Massively parallel genomic sequence search on the Blue Gene/P architecture This paper presents our first experiences in mapping and optimizing genomic sequence search onto the massively parallel IBM Blue Gene/P (BG/P) platform. Specifically, we performed our work on mpiBLAST, a parallel sequence-search code that has been optimized on numerous supercomputing environments. In doing so, we identify several critical performance issues. Consequently, we propose and study different approaches for mapping sequence-search and parallel I/O tasks on such massively parallel architectures. We demonstrate that our optimizations can deliver nearly linear scaling (93% efficiency) on up to 32,768 cores of BG/P. In addition, we show that such scalability enables us to complete a large-scale bioinformatics problem --- sequence searching a microbial genome database against itself to support the discovery of missing genes in genomes --- in only a few hours on BG/P. Previously, this problem was viewed as computationally intractable in practice.
Mercury BLASTP: Accelerating Protein Sequence Alignment Large-scale protein sequence comparison is an important but compute-intensive task in molecular biology. BLASTP is the most popular tool for comparative analysis of protein sequences. In recent years, an exponential increase in the size of protein sequence databases has required either exponentially more running time or a cluster of machines to keep pace. To address this problem, we have designed and built a high-performance FPGA-accelerated version of BLASTP, Mercury BLASTP. In this article, we describe the architecture of the portions of the application that are accelerated in the FPGA, and we also describe the integration of these FPGA-accelerated portions with the existing BLASTP software. We have implemented Mercury BLASTP on a commodity workstation with two Xilinx Virtex-II 6000 FPGAs. We show that the new design runs 11--15 times faster than software BLASTP on a modern CPU while delivering close to 99% identical results.
Unsupervised Learning of Multiple Motifs in Biopolymers Using Expectation Maximization The MEME algorithm extends the expectation maximization (EM) algorithm for identifying motifs in unaligned biopolymer sequences. The aim of MEME is to discover new motifs in a set of biopolymer sequences where little or nothing is known in advance about any motifs that may be present. MEME innovations expand the range of problems which can be solved using EM and increase the chance of finding good solutions. First, subsequences which actually occur in the biopolymer sequences are used as starting points for the EM algorithm to increase the probability of finding globally optimal motifs. Second, the assumption that each sequence contains exactly one occurrence of the shared motif is removed. This allows multiple appearances of a motif to occur in any sequence and permits the algorithm to ignore sequences with no appearance of the shared motif, increasing its resistance to noisy data. Third, a method for probabilistically erasing shared motifs after they are found is incorporated so that several distinct motifs can be found in the same set of sequences, both when different motifs appear in different sequences and when a single sequence may contain multiple motifs. Experiments show that MEME can discover both the CRP and LexA binding sites from a set of sequences which contain one or both sites, and that MEME can discover both the −10 and −35 promoter regions in a set of E. coli sequences.
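The EM core that MEME extends can be illustrated with a stripped-down, one-occurrence-per-sequence motif search: the E-step weighs every length-w window by its likelihood under the current motif model, and the M-step re-estimates letter frequencies from those weights. MEME's innovations (subsequence starting points, zero-or-more occurrences, motif erasing) are deliberately omitted, and the toy data and pseudocounts are illustrative.

    import numpy as np

    def em_motif(seqs, w, n_iter=50, alphabet="ACGT"):
        """Simplified one-occurrence-per-sequence EM motif search (sketch)."""
        idx = {c: i for i, c in enumerate(alphabet)}
        X = [np.array([idx[c] for c in s]) for s in seqs]
        rng = np.random.default_rng(5)
        theta = np.full((w, 4), 0.25) + rng.random((w, 4)) * 0.01
        theta /= theta.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            counts = np.full((w, 4), 0.1)             # pseudocounts
            for x in X:
                n_pos = len(x) - w + 1
                # E-step: posterior over the motif's start position
                logp = np.array([np.log(theta[np.arange(w), x[p:p+w]]).sum()
                                 for p in range(n_pos)])
                z = np.exp(logp - logp.max()); z /= z.sum()
                # M-step contribution: expected letter counts per column
                for p in range(n_pos):
                    for j in range(w):
                        counts[j, x[p + j]] += z[p]
            theta = counts / counts.sum(axis=1, keepdims=True)
        return theta

    seqs = ["ACGTACGTTT", "TTACGTGGGG", "GGGACGTCCC"]
    # the shared ACGT window should come to dominate the columns
    print(em_motif(seqs, w=4).round(2))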
Applications of circumscription to formalizing common-sense knowledge We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common-sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are nonmonotonic treatments of is-a hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored.
Perceiving And Reasoning About A Changing World A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason-schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR.
Two forms of dependence in propositional logic: controllability and definability We investigate two forms of dependence between variables and/or formulas within a propositional knowledge base: controllability (a set of variables X controls a formula φ if there is a way to fix the truth values of the variables in X so that φ takes a prescribed truth value) and definability (X defines a variable y if every truth assignment of the variables in X enables us to find out the truth value of y). Several characterization results are pointed out, complexity issues are analyzed, and some applications of both notions, including decision under incomplete knowledge and/or partial observability, and hypothesis discrimination, are sketched.
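Definability, at least, is easy to check by brute force on small examples: X defines y exactly when no two models of the knowledge base agree on X but disagree on y. The encoding below (a knowledge base as a Python predicate over assignments) is an illustrative choice.

    from itertools import product

    def defines(kb, X, y, all_vars):
        """Check whether the variables in X define y under knowledge base kb.

        Brute force over models of kb: X defines y iff no two models agree
        on X but disagree on y. `kb` is a predicate over assignments (dicts).
        """
        models = [dict(zip(all_vars, bits))
                  for bits in product([False, True], repeat=len(all_vars))]
        models = [m for m in models if kb(m)]
        seen = {}
        for m in models:
            key = tuple(m[v] for v in X)
            if key in seen and seen[key] != m[y]:
                return False
            seen[key] = m[y]
        return True

    # kb: y <-> (a xor b); then {a, b} defines y, but {a} alone does not
    kb = lambda m: m["y"] == (m["a"] != m["b"])
    print(defines(kb, ["a", "b"], "y", ["a", "b", "y"]))  # True
    print(defines(kb, ["a"], "y", ["a", "b", "y"]))       # False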
On subclasses of minimal unsatisfiable formulas We consider the minimal unsatisfiability problem MU(k) for propositional formulas in conjunctive normal form (CNF) over n variables and n + k clauses, where k is fixed. k is called the difference. Any formula in MU(k) can be split into two minimal unsatisfiable formulas. For such splittings we investigate the size of the differences of the resulting formulas in comparison to the difference of the initial formula. Based on these results we prove that MU(k) for fixed k is in NP, and for MU(2) we present a simple and unique characterization.
Two-Layer Multiple Kernel Learning.
Improving Citation Polarity Classification With Product Reviews Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than using in-domain classification.
1.021412
0.0248
0.02
0.016
0.010354
0.005793
0.002622
0.00005
0
0
0
0
0
0
Deep Network with Support Vector Machines.
Stacked generalization This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory.
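A minimal sketch of stacked generalization: out-of-fold predictions of the level-0 generalizers become the training inputs of the level-1 generalizer, which learns how to combine them rather than picking a winner. The ridge base learners and least-squares combiner below are arbitrary stand-ins for whatever generalizers one actually uses.

    import numpy as np

    def stack(level0_fits, level0_predicts, level1_fit, X, y, k=5):
        """Stacked generalization (minimal sketch).

        Out-of-fold predictions of the level-0 generalizers form the
        training set of the level-1 generalizer, so the combiner learns
        each base model's biases from held-out guesses.
        """
        n = len(y)
        meta = np.zeros((n, len(level0_fits)))
        folds = np.array_split(np.arange(n), k)
        for hold in folds:
            train = np.setdiff1d(np.arange(n), hold)
            for j, (fit, predict) in enumerate(zip(level0_fits, level0_predicts)):
                model = fit(X[train], y[train])
                meta[hold, j] = predict(model, X[hold])
        return level1_fit(meta, y)       # the trained combiner

    # toy base learners: ridge regressions with different regularization
    def make_ridge(lam):
        fit = lambda X, y: np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        predict = lambda w, X: X @ w
        return fit, predict

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1., -2., 0.5]) + rng.normal(size=100) * 0.1
    fits, preds = zip(*[make_ridge(l) for l in (0.1, 10.0)])
    w_meta = stack(list(fits), list(preds),
                   lambda M, y: np.linalg.lstsq(M, y, rcond=None)[0], X, y)
    print(w_meta)  # weights the combiner assigns to each base model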
Practical Issues in Temporal Difference Learning This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
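For concreteness, here is tabular TD(λ) with accumulating eligibility traces on a toy two-state chain; broadly speaking, the backgammon application applies the same update rule with a neural network in place of the table. The chain, parameters, and episode construction are made up for the example.

    import numpy as np

    def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
        """Tabular TD(lambda) with accumulating eligibility traces (sketch).

        Each episode is a list of (state, reward, next_state) transitions;
        terminal transitions use next_state = None.
        """
        V = np.zeros(n_states)
        for ep in episodes:
            e = np.zeros(n_states)               # eligibility traces
            for s, r, s_next in ep:
                target = r + (gamma * V[s_next] if s_next is not None else 0.0)
                delta = target - V[s]
                e[s] += 1.0                      # accumulate trace
                V += alpha * delta * e           # update all traced states
                e *= gamma * lam                 # decay traces
        return V

    # two-state chain: state 0 -> state 1 -> terminal reward 1
    episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 200
    print(td_lambda(episodes, n_states=2).round(2))  # approx [0.9, 1.0]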
Deep4MalDroid: A Deep Learning Framework for Android Malware Detection Based on Linux Kernel System Call Graphs With explosive growth of Android malware and due to its damage to smart phone users (e.g., stealing user credentials, resource abuse), Android malware detection is one of the cyber security topics that are of great interests. Currently, the most significant line of defense against Android malware is anti-malware software products, such as Norton, Lookout, and Comodo Mobile Security, which mainly use the signature-based method to recognize threats. However, malware attackers increasingly employ techniques such as repackaging and obfuscation to bypass signatures and defeat attempts to analyze their inner mechanisms. The increasing sophistication of Android malware calls for new defensive techniques that are harder to evade, and are capable of protecting users against novel threats. In this paper, we propose a novel dynamic analysis method named Component Traversal that can automatically execute the code routines of each given Android application (app) as completely as possible. Based on the extracted Linux kernel system calls, we further construct the weighted directed graphs and then apply a deep learning framework resting on the graph based features for newly unknown Android malware detection. A comprehensive experimental study on a real sample collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed method outperforms other alternative Android malware detection techniques. Our developed system Deep4MalDroid has also been integrated into a commercial Android anti-malware software.
Representational power of restricted boltzmann machines and deep belief networks Deep belief networks (DBN) are generative neural network models with many layers of hidden explanatory factors, recently introduced by Hinton, Osindero, and Teh (2006) along with a greedy layer-wise unsupervised learning algorithm. The building block of a DBN is a probabilistic model called a restricted Boltzmann machine (RBM), used to represent one layer of the model. Restricted Boltzmann machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models. We first prove that adding hidden units yields strictly improved modeling power, while a second theorem shows that RBMs are universal approximators of discrete distributions. We then study the question of whether DBNs with more layers are strictly more powerful in terms of representational power. This suggests a new and less greedy criterion for training RBMs within DBNs.
LIBSVM: A library for support vector machines LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
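Since scikit-learn's SVC is built on top of LIBSVM, a short usage example can show the options the article discusses: kernel choice, one-vs-one multiclass handling, and probability estimates. The dataset and parameter values here are arbitrary.

    # scikit-learn's SVC wraps LIBSVM, so this exercises the same machinery:
    # kernel choice, multiclass handling, and probability estimates.
    from sklearn import datasets
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = datasets.load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
    clf.fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))        # held-out accuracy
    print(clf.predict_proba(X_te[:1]))  # Platt-style probability estimates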
An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators Statistical and computational concerns have motivated parameter estimators based on various forms of likelihood, e.g., joint, conditional, and pseudolikelihood. In this paper, we present a unified framework for studying these estimators, which allows us to compare their relative (statistical) efficiencies. Our asymptotic analysis suggests that modeling more of the data tends to reduce variance, but at the cost of being more sensitive to model misspecification. We present experiments validating our analysis.
Describing Visual Scenes Using Transformed Objects and Parts We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.
Decoding by linear programming This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ R^n} ||y − Ag||_{ℓ1} (where ||x||_{ℓ1} := Σ_i |x_i|), provided that the support of the vector of errors is not too large: ||e||_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
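A minimal sketch of the recovery procedure via scipy: the ℓ1-minimization is recast as a linear program over the augmented variable (g, t) with constraints −t ≤ y − Ag ≤ t. The problem sizes and corruption level below are arbitrary test values.

    import numpy as np
    from scipy.optimize import linprog

    def l1_decode(A, y):
        """Recover f from y = A f + e by l1-minimization, recast as an LP.

        min_g ||y - A g||_1  becomes  min 1'.t  s.t.  -t <= y - A g <= t,
        with stacked LP variables x = [g, t].
        """
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.block([[ A, -np.eye(m)],
                         [-A, -np.eye(m)]])
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:n]

    rng = np.random.default_rng(7)
    m, n = 60, 20
    A = rng.normal(size=(m, n))
    f = rng.normal(size=n)
    e = np.zeros(m)
    e[rng.choice(m, 6, replace=False)] = rng.normal(size=6) * 5  # gross errors
    g = l1_decode(A, A @ f + e)
    # typically True: exact recovery (up to solver tolerance) despite corruption
    print(np.max(np.abs(g - f)) < 1e-4)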
Tree clustering for constraint networks The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.
Disk-directed I/O for MIMD multiprocessors Many scientific applications that run on today's multiprocessors, such as weather forecasting and seismic analysis, are bottlenecked by their file-I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file system software often fails to provide the available bandwidth to the application. Although libraries and enhanced file system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file server software. We propose a new technique, disk-directed I/O, to allow the disk servers to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible both for simple reads and writes and for an out-of-core application. Indeed, our disk-directed I/O technique provided consistent high performance that was largely independent of data distribution and obtained up to 93% of peak disk bandwidth. It was as much as 18 times faster than either a typical parallel file system or a two-phase-I/O library.
Parallel software development using an object-oriented modelling technique A significant amount of interest is currently being shown in the relationship between the paradigms of object-orientation and concurrency. This stems from the observation that objects display a great deal of concurrent behaviour in the way they can co-exist with one another. As a result, much research effort has gone into exploiting this relationship, primarily in the development of programming languages specifically aimed at producing parallel software. However, the exploitation of the object-oriented paradigm in the analysis and design of parallel software has not seen the same level of interest. This work presents an investigation into adopting object-oriented approaches during the analysis and design of parallel software by taking a well established object modelling method (OMT) and extending it using the PARSE process graph notation to account for the added dimensions of concurrency. This hybrid method is analysed and discussed by way of the development and implementation of a common parallel software scenario. The results of this exercise show that adopting an object-oriented view at the analysis and design stage of development can benefit the production of such a parallel software solution.
The MHETA Execution Model for Heterogeneous Clusters The availability of inexpensive "off the shelf" machines increases the likelihood that parallel programs run on heterogeneous clusters of machines. These programs are increasingly likely to be out of core, meaning that portions of their datasets must be stored on disk during program execution. This results in significant per-iteration I/O cost. This paper describes an execution model, called MHETA, which is the key component to finding an effective data distribution on heterogeneous clusters. MHETA takes into account computation, communication, and I/O costs of iterative scientific applications. MHETA uses automatically extracted information from a single iteration to predict the execution time of the remaining iterations. Results show that MHETA predicts with, on average, 98% accuracy the execution time of several scientific benchmarks (with and without prefetching) and one full-scale scientific program that utilize pipelined and other communication. MHETA is thus an effective tool when searching for the most effective distribution on a heterogeneous cluster.
Anatomical Structure Sketcher For Cephalograms By Bimodal Deep Learning The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnosis and treatment planning in clinical dentistry. Robust anatomical structure detection and accurate annotation remain challenging considering personal skeletal variations and image blurs caused by device-specific projection magnification, together with structure overlapping in the lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path in a manner similar to data completion. The proposed method is not only robust in structure detection, but also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method.
1.105
0.025002
0.021007
0.01
0.004151
0.001923
0.00042
0.00001
0.000001
0
0
0
0
0
System identification using evolutionary Markov chain Monte Carlo System identification involves determination of the functional structure of a target system that underlies the observed data. In this paper, we present a probabilistic evolutionary method that optimizes system architectures for the identification of unknown target systems. The method is distinguished from existing evolutionary algorithms (EAs) in that the individuals are generated from a probability distribution as in Markov chain Monte Carlo (MCMC). It is also distinguished from conventional MCMC methods in that the search is population-based as in standard evolutionary algorithms. The effectiveness of this hybrid of evolutionary computation and MCMC is tested on a practical problem, i.e., evolving neural net architectures for the identification of nonlinear dynamic systems. Experimental evidence supports that evolutionary MCMC (or eMCMC) exploits the efficiency of simple evolutionary algorithms while maintaining the robustness of MCMC methods and outperforms either approach used alone.
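The hybrid described above can be made concrete with a toy sketch: a population in which each individual takes a Metropolis step toward a stand-in target distribution. This simplification is assumed for illustration only; the paper's eMCMC additionally uses evolutionary variation operators (mutation, crossover) as proposal mechanisms over neural network architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # stand-in log-posterior over a 1-D parameter (assumed for illustration)
    return -0.5 * (x - 3.0) ** 2

def emcmc_sweep(pop, step=0.5):
    """One sweep of a population-based Metropolis sampler: every individual
    proposes a move and accepts it with the usual Metropolis probability."""
    for i in range(len(pop)):
        proposal = pop[i] + step * rng.normal()
        if np.log(rng.random()) < log_target(proposal) - log_target(pop[i]):
            pop[i] = proposal
    return pop

pop = list(rng.normal(size=10))   # the population, as in standard EAs
for _ in range(1000):
    pop = emcmc_sweep(pop)
print(np.mean(pop))               # concentrates near the target mode (3.0)
```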
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
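A compact sketch of the kernel trick for PCA described above: form the kernel matrix, center it in feature space, and eigendecompose. The RBF kernel and random data are assumptions made here for illustration; the paper's derivation covers kernels in general.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix, i.e., center the data in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose; the top eigenvectors give the nonlinear components
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Projections of the training points onto the principal components
    return vecs * np.sqrt(np.maximum(vals, 0.0))

X = np.random.default_rng(0).normal(size=(100, 5))
Z = kernel_pca(X)          # (100, 2) nonlinear component scores
```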
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
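The core numerical step above, factorizing the measurement Jacobian into square root form and back-substituting, can be sketched in a few lines. This is a toy dense example with stand-in data; real implementations exploit the sparsity and column-ordering heuristics the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(200, 50))    # stand-in measurement Jacobian
r = rng.normal(size=200)          # stand-in residual vector

# Linearized least-squares step: min ||J dx - r||^2, solved via QR.
Q, R = np.linalg.qr(J)            # square root form: R^T R = J^T J
dx = np.linalg.solve(R, Q.T @ r)  # back-substitution yields the update
```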
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Information and control in gray-box systems In modern systems, developers are often unable to modify the underlying operating system. To build services in such an environment, we advocate the use of gray-box techniques. When treating the operating system as a gray-box, one recognizes that not changing the OS restricts, but does not completely obviate, both the information one can acquire about the internal state of the OS and the control one can impose on the OS. In this paper, we develop and investigate three gray-box Information and Control Layers (ICLs) for determining the contents of the file-cache, controlling the layout of files across local disk, and limiting process execution based on available memory. A gray-box ICL sits between a client and the OS and uses a combination of algorithmic knowledge, observations, and inferences to garner information about or control the behavior of a gray-box system. We summarize a set of techniques that are helpful in building gray-box ICLs and have begun to organize a "gray toolbox" to ease the construction of ICLs. Through our case studies, we demonstrate the utility of gray-box techniques, by implementing three useful "OS-like" services without the modification of a single line of OS source code.
KNOWAC: I/O Prefetch via Accumulated Knowledge The lasting memory-wall problem combined with the newly emerged big-data problem makes data access delay the first citizen of performance optimizations of cluster computing. Reduction of data access delay, however, is application dependent. It depends on the data access behaviors of the underlying applications. Therefore, learning and understanding data access behaviors is a must for effective data access optimizations. Modern microprocessors are equipped with hardware data prefetchers, which predict data access patterns and prefetch data for the CPU. However, memory systems in design do not have the capability to understand data access behaviors for performance optimizations. In this study, we propose a novel approach, named KNOWAC, to collect I/O information automatically through high-level I/O libraries. KNOWAC accumulates I/O knowledge and reveals data usage patterns by exploring the collected high-level I/O characteristics. The discovered data usage patterns can be used for different I/O optimizations. We apply KNOWAC to I/O prefetching under the framework of PnetCDF in this study. Experimental results on a real-world application show that KNOWAC is promising and has true practical value in mitigating the I/O bottleneck.
CloSpan: Mining Closed Sequential Patterns in Large Databases Previous sequential pattern mining algorithms mine the full set of frequent subsequences satisfying a min-sup threshold in a sequence database. However, since a frequent long sequence contains a combinatorial number of frequent subsequences, such mining will generate an explosive number of frequent subsequences for long patterns, which is prohibitively expensive in both time and space. In this paper, we propose an alternative but equally powerful solution: instead of mining the complete set of frequent subsequences, we mine frequent closed subsequences only, i.e., those containing no super-sequence with the same support (i.e., occurrence frequency). By exploring novel global optimization techniques, an efficient algorithm, called CloSpan (Closed Sequential pattern mining), is developed, which outperforms the previous work by one order of magnitude. Moreover, CloSpan can mine really long sequences, which, to the best of our knowledge, are un-minable by previous algorithms. Finally, CloSpan produces a significantly smaller number of discovered sequences than the traditional (i.e., full-set) methods while preserving the same expressive power, since the whole set of frequent subsequences, together with their supports, can be derived easily from our mining results.
Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early.
LiveJournal's Backend and memcached: Past, Present, and Future
Increasing predictive accuracy by prefetching multiple program and user specific files Recent increases in CPU performance have outpaced increases in hard drive performance. As a result, disk operations have become more expensive in terms of CPU cycles spent waiting for disk operations to complete. File prediction can mitigate this problem by prefetching files into cache before they are accessed. However, incorrect prediction is to a certain degree both unavoidable and costly. We present the Program-based and User-based Last n Successors (PULnS) file prediction model that identifies relationships between files through the names of the programs and the users accessing them. Our simulation results show that, in the worst case, PULnS makes at least 20% fewer incorrect predictions and roughly the same number of correct predictions as the last-successor model.
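To make the idea concrete, here is a toy last-n-successors predictor keyed by (program, user, file), in the spirit of PULnS but not the paper's exact algorithm; the class name and parameters are hypothetical.

```python
from collections import defaultdict, deque

class LastNSuccessors:
    """Toy predictor: remember the last n distinct successors of each
    (program, user, file) access and offer them as prefetch candidates."""
    def __init__(self, n=3):
        self.table = defaultdict(lambda: deque(maxlen=n))
        self.prev = None   # key of the most recent access

    def access(self, program, user, name):
        key = (program, user, name)
        if self.prev is not None:
            succ = self.table[self.prev]
            if name in succ:
                succ.remove(name)   # move to most-recent position
            succ.append(name)
        self.prev = key

    def predict(self):
        return list(self.table[self.prev]) if self.prev else []

p = LastNSuccessors()
p.access("gcc", "alice", "main.c")
p.access("gcc", "alice", "main.h")
p.access("gcc", "alice", "main.c")
print(p.predict())   # files to prefetch next: ['main.h']
```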
An evaluation of buffer management strategies for relational database systems In this paper we present a new algorithm, DBMIN, for managing the buffer pool of a relational database management system. DBMIN is based on a new model of relational query behavior, the query locality set model (QLSM). Like the hot set model, the QLSM has an advantage over the stochastic models due to its ability to predict future reference behavior. However, the QLSM avoids the potential problems of the hot set model by separating the modeling of reference behavior from any particular buffer management algorithm. After introducing the QLSM and describing the DBMIN algorithm, we present a performance evaluation methodology for evaluating buffer management algorithms in a multiuser environment. This methodology employs a hybrid model that combines features of both trace-driven and distribution-driven simulation models. Using this model, the performance of the DBMIN algorithm in a multiuser environment is compared with that of the hot set algorithm and four more traditional buffer replacement algorithms.
Automated hoarding for mobile computers A common problem facing mobile computing is disconnected operation, or computing in the absence of a network. Hoarding eases disconnected operation by selecting a subset of the user's files for local storage. We describe a hoarding system that can operate without user intervention, by observing user activity and predicting future needs. The system calculates a new measure, semantic distance, between individual files, and uses this to feed a clustering algorithm that chooses which files should be hoarded. A separate replication system manages the actual transport of data; any of a number of replication systems may be used. We discuss practical problems encountered in the real world and present usage statistics showing that our system outperforms previous approaches by factors that can exceed 10:1.
Practical prefetching via data compression An important issue that affects response time performance in current OODB and hypertext systems is the I/O involved in moving objects from slow memory to cache. A promising way to tackle this problem is to use prefetching, in which we predict the user's next page requests and get those pages into cache in the background. Current databases perform limited prefetching using techniques derived from older virtual memory systems. A novel idea of using data compression techniques for prefetching was recently advocated in [KrV, ViK], in which prefetchers based on the Lempel-Ziv data compressor (the UNIX compress command) were shown theoretically to be optimal in the limit. In this paper we analyze the practical aspects of using data compression techniques for prefetching. We adapt three well-known data compressors to get three simple, deterministic, and universal prefetchers. We simulate our prefetchers on sequences of page accesses derived from the OO1 and OO7 benchmarks and from CAD applications, and demonstrate significant reductions in fault-rate. We examine the important issues of cache replacement, size of the data structure used by the prefetcher, and problems arising from bursts of “fast” page requests (that leave virtually no time between adjacent requests for prefetching and bookkeeping). We conclude that prediction for prefetching based on data compression techniques holds great promise.
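A drastically simplified sketch of the compression-based idea: a good compressor assigns high probability to likely next symbols, so the same statistics can rank pages to prefetch. The first-order frequency table below is a heavily reduced stand-in for the Lempel-Ziv-style context models the paper analyzes.

```python
from collections import Counter, defaultdict

class MarkovPrefetcher:
    """First-order frequency predictor: a simplified stand-in for the
    Lempel-Ziv-based prefetchers studied in the paper."""
    def __init__(self, k=2):
        self.counts = defaultdict(Counter)  # page -> Counter of next pages
        self.prev = None
        self.k = k                          # how many pages to prefetch

    def access(self, page):
        if self.prev is not None:
            self.counts[self.prev][page] += 1
        self.prev = page

    def prefetch(self):
        # the k most probable successors of the current page
        return [p for p, _ in self.counts[self.prev].most_common(self.k)]

pf = MarkovPrefetcher()
for page in ["a", "b", "a", "b", "a", "c", "a", "b"]:
    pf.access(page)
print(pf.prefetch())   # after 'b': most likely next pages, e.g. ['a']
```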
Transforming policies into mechanisms with infokernel We describe an evolutionary path that allows operating systems to be used in a more flexible and appropriate manner by higher-level services. An infokernel exposes key pieces of information about its algorithms and internal state; thus, its default policies become mechanisms, which can be controlled from user-level. We have implemented two prototype infokernels based on the Linux 2.4 and NetBSD kernels, called infolinux and infobsd, respectively. The infokernels export key abstractions as well as basic information primitives. Using infolinux, we have implemented four case studies showing that policies within Linux can be manipulated outside of the kernel. Specifically, we show that the default file cache replacement algorithm, file layout policy, disk scheduling algorithm, and TCP congestion control algorithm can each be turned into base mechanisms. For each case study, we have found that infokernel abstractions can be implemented with little code and that the overhead and accuracy of synthesizing policies at user-level is acceptable.
RISC: A resilient interconnection network for scalable cluster storage systems The explosive growth of data generated by information digitization has been identified as the key driver to escalate storage requirements. It is becoming a big challenge to design a resilient and scalable interconnection network which consolidates hundreds even thousands of storage nodes to satisfy both the bandwidth and storage capacity requirements. This paper proposes a resilient interconnection network for storage cluster systems (RISC). The RISC divides storage nodes into multiple partitions to facilitate the data access locality. Multiple spare links between any two storage nodes are employed to offer strong resilience to reduce the impact of the failures of links, switches, and storage nodes. The scalability is guaranteed by plugging in additional switches and storage nodes without reconfiguring the overall system. Another salient feature is that the RISC achieves a dynamic scalability of resilience by expanding the partition size incrementally with additional storage nodes along with associated two network interfaces that expand resilience degree and balance workload proportionally. A metric named resilience coefficient is proposed to measure the interconnection network. A mathematical model and the corresponding case study are employed to illustrate the practicability and efficiency of the RISC.
The Logic of Persistence A recent paper (Hanks and McDermott 1985) examines temporal reasoning as an example of default reasoning. They conclude that all current systems of default reasoning, including non-monotonic logic, default logic, and circumscription, are inadequate for reasoning about persistence. I present a way of representing persistence in a framework based on a generalization of circumscription, which captures Hanks and McDermott's procedural representation.
Disaster recovery techniques for database systems The widespread use of computers has brought about revolutionary changes in society. Computers are becoming vital in all aspects of human life, whether employed in life-critical systems such as air traffic control and autopilot navigation control systems, or in point-of-sales management systems and cinema ticket purchasing systems. Data stored in computer systems is often a company’s most valuable asset, one that must be protected at all costs. Businesses also must be prepared to provide continued service in case of a disaster. Fault-tolerance techniques have been employed to increase computer system availability, and to reduce the damage caused by component failure. Vital data is stored on stable storage, which survives failures such as electrical outages or system crashes. Also, redundant copies of data can be placed on multiple stable storage devices. This approach protects data if failures in storage media are independent, but may be ineffective if disaster strikes. Recall the 1906 earthquake in San Francisco, which destroyed more than half the city. When the U.S. Federal Building in Oklahoma City was bombed, data as well as on-site backups were destroyed. Since data losses and system unavailability resulting from a disaster cripple the operation of an organization, federal legislation now requires the development of recovery plans [5]. Extensive backup procedures have been developed to protect against data losses during disasters, such as the grandfather-father-son backup procedure, the incremental logging technique, and the data image dumping method. In addition to guarding against data losses, a system must also provide its normal services after a disaster strikes. Thus, as with data, computer hardware must also be replicated.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.014058
0.014815
0.011401
0.011348
0.011348
0.005694
0.003051
0.001635
0.000332
0.000046
0.000002
0
0
0
Adaptive, transparent CPU scaling algorithms leveraging inter-node MPI communication regions Although users of high-performance computing are most interested in raw performance, both energy and power consumption have become critical concerns. Because the CPU is often the major power consumer, some microprocessors allow frequency and voltage scaling, which enables a system to efficiently reduce CPU performance and power. When the CPU is not on the critical path, such dynamic frequency and voltage scaling can produce significant energy savings with little performance penalty. This paper presents an MPI runtime system that dynamically reduces CPU frequency and voltage during communication phases in MPI programs. It dynamically identifies such phases and, without a priori knowledge, selects the CPU frequency in order to minimize energy-delay product. All analysis and subsequent frequency and voltage scaling is within MPI and so is entirely transparent to the application. This means that the large number of existing MPI programs, as well as new ones being developed, can use our system without modification. Results show that the median reduction in energy-delay product for twelve benchmarks is 8%, the median energy reduction is 11%, and the median increase in execution time is only 2%.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Local Deep-Feature Alignment for Unsupervised Dimension Reduction. This paper presents an unsupervised deep-learning framework named local deep-feature alignment (LDFA) for dimension reduction. We construct neighbourhood for each data sample and learn a local stacked contractive auto-encoder (SCAE) from the neighbourhood to extract the local deep features. Next, we exploit an affine transformation to align the local deep features of each neighbourhood with the gl...
Coupled Deep Autoencoder for Single Image Super-Resolution. Sparse coding has been widely applied to learning-based single image super-resolution (SR) and has obtained promising performance by jointly learning effective representations for low-resolution (LR) and high-resolution (HR) image patch pairs. However, the resulting HR images often suffer from ringing, jaggy, and blurring artifacts due to the strong yet ad hoc assumptions that the LR image patch r...
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Extended stable semantics for normal and disjunctive programs
The nature of statistical learning theory
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
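The Davis-Putnam idea survives today in DPLL-style solvers. Below is a minimal sketch of the backtracking core (unit propagation plus branching), written fresh for illustration rather than taken from the paper.

```python
def dpll(clauses):
    """Decide satisfiability of a CNF formula.
    clauses: list of sets of nonzero ints; -v means 'not v'."""
    clauses = [set(c) for c in clauses]
    # unit propagation: repeatedly assign literals forced by unit clauses
    units = {next(iter(c)) for c in clauses if len(c) == 1}
    while units:
        lit = units.pop()
        new = []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied: drop it
            c = c - {-lit}               # falsified literal: remove it
            if not c:
                return False             # empty clause: conflict
            if len(c) == 1:
                units.add(next(iter(c))) # newly forced literal
            new.append(c)
        clauses = new
    if not clauses:
        return True                      # every clause satisfied
    v = next(iter(clauses[0]))           # branch on some remaining literal
    return dpll(clauses + [{v}]) or dpll(clauses + [{-v}])

print(dpll([{1, 2}, {-1, 2}]))       # True: satisfiable
print(dpll([{1, 2}, {-1, 2}, {-2}])) # False: unsatisfiable
```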
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem.
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also proposed a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discussed the advantages, the limitations and the future work of our study. Both practical and theoretical contributions were also discussed in the paper. © 2012 by the AIS/ICIS Administrative Office All rights reserved.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.1
0.033333
0.000098
0
0
0
0
0
0
0
0
0
0
0
A Blended Deep Learning Approach for Predicting User Intended Actions User intended actions are widely seen in many areas. Forecasting these actions and taking proactive measures to optimize business outcomes is a crucial step towards sustaining steady business growth. In this work, we focus on predicting attrition, which is a typical user intended action. Conventional attrition predictive modeling strategies suffer from a few inherent drawbacks. To overcome these limitations, we propose a novel end-to-end learning scheme to keep track of the evolution of attrition patterns for predictive modeling. It integrates user activity logs and dynamic and static user profiles based on multi-path learning. It exploits historical user records by establishing a decaying multi-snapshot technique. Finally, it employs precedent user intentions by guiding them into the subsequent learning procedure. As a result, it addresses all the disadvantages of conventional methods. We evaluate our methodology on two public data repositories and one private user usage dataset provided by Adobe Creative Cloud. The extensive experiments demonstrate that it offers appealing performance in comparison with several existing approaches, as rated by different popular metrics. Furthermore, we introduce an advanced interpretation and visualization strategy to effectively characterize the periodicity of user activity logs. It can help to pinpoint important factors that are critical to user attrition and retention, and thus suggests actionable improvement targets for business practice. Our work will provide useful insights into the prediction and elucidation of other user intended actions as well.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
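The kernel-eigenvalue idea in this abstract is compact enough to show directly: eigendecompose the centered Gram matrix to obtain principal components in an implicit feature space. This is a minimal sketch; the RBF kernel, its width, and the two-ring toy data are illustrative choices rather than the paper's setup.

```python
# Minimal kernel PCA: eigendecomposition of the centered Gram matrix.
# RBF kernel and gamma are assumed for illustration.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                           # projections of training points

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] * rng.choice([1.0, 3.0], 200)[:, None]
Z = kernel_pca(X, n_components=2, gamma=0.5)     # two concentric rings
print(Z.shape)                                   # (200, 2) nonlinear components
```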
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
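The square-root factorization this abstract advocates can be shown on a toy problem: solve the linearized SLAM least-squares system via QR factorization of the measurement Jacobian, so the information matrix is never formed explicitly. The tiny 1-D pose chain below is an assumed example, not the paper's setup.

```python
# Toy square-root smoothing: QR on the measurement Jacobian, then
# back-substitution. The 1-D pose chain and noise-free weights are assumed.
import numpy as np

# three 1-D poses: a prior anchors x0 at 0, odometry says each step moves +1,
# and a loop closure measures x2 - x0 = 2.1 (slightly inconsistent on purpose)
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior on x0
    [-1.0,  1.0, 0.0],   # odometry x1 - x0 = 1
    [ 0.0, -1.0, 1.0],   # odometry x2 - x1 = 1
    [-1.0,  0.0, 1.0],   # loop closure x2 - x0 = 2.1
])
b = np.array([0.0, 1.0, 1.0, 2.1])

Q, R = np.linalg.qr(A)            # R is the square-root information matrix
x = np.linalg.solve(R, Q.T @ b)   # back-substitution recovers the trajectory
print(x)                          # ~[0.0, 1.033, 2.067]: residual spread evenly
```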
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Online Remote Data Backup for iSCSI-Based Storage Systems Data reliability is critical to many data-sensitive applications, especially to the emerging storage over the network. In this paper, we propose a method to perform remote online data backup to improve reliability for iSCSI-based networked storage systems. The basic idea is to apply traditional RAID technology to the iSCSI environment. Our first technique improves reliability by mirroring data among several iSCSI targets; the second improves both reliability and performance by striping data and rotating parity over several iSCSI targets. Extensive measurement results using Iozone have shown that both techniques can provide comparable performance while improving reliability.
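The second technique in this abstract, striping with rotating parity, is essentially a RAID-5 layout over network targets. The sketch below shows the idea under assumptions: a toy 4-byte block size, three targets, and XOR parity; it is an illustration of the general scheme, not the paper's implementation.

```python
# RAID-5 style sketch: stripe data blocks across targets and rotate an XOR
# parity block among them; any single failed target is reconstructible.
from functools import reduce

BLOCK = 4  # bytes per block, tiny for readability (an assumption)

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def stripe_with_parity(data: bytes, n_targets: int = 3):
    """Return per-target block lists; the parity position rotates per stripe."""
    per_stripe = n_targets - 1
    blocks = [data[i:i + BLOCK].ljust(BLOCK, b'\0')
              for i in range(0, len(data), BLOCK)]
    targets = [[] for _ in range(n_targets)]
    for s in range(0, len(blocks), per_stripe):
        stripe = blocks[s:s + per_stripe]
        stripe += [b'\0' * BLOCK] * (per_stripe - len(stripe))  # pad last stripe
        pos = (s // per_stripe) % n_targets          # rotate parity target
        layout = stripe[:pos] + [xor_blocks(stripe)] + stripe[pos:]
        for t in range(n_targets):
            targets[t].append(layout[t])
    return targets

def recover(targets, lost: int):
    """Rebuild the lost target by XOR-ing the surviving blocks stripe-wise."""
    ok = [t for i, t in enumerate(targets) if i != lost]
    return [xor_blocks(blks) for blks in zip(*ok)]

t = stripe_with_parity(b"hello world, iscsi!")
assert recover(t, 1) == t[1]   # a failed target is fully reconstructible
```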
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
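The kind of reliability arithmetic behind claims like this can be sketched with a simple binomial model: the probability that an array survives when each disk fails independently and the layout tolerates up to f concurrent failures. Ignoring repair makes this a crude lower bound, and the disk counts, failure probability, and tolerance values below are assumptions for illustration, not the paper's model.

```python
# Back-of-the-envelope array reliability: P(at most f of n disks fail),
# binomial model with independent failures and no repair (an assumption).
from math import comb

def survival(n_disks: int, p_fail: float, f_tolerated: int) -> float:
    return sum(comb(n_disks, k) * p_fail**k * (1 - p_fail)**(n_disks - k)
               for k in range(f_tolerated + 1))

n = 36 + 12   # 6x6 data grid plus 2n parity disks (n = 6), as in the abstract
p = 0.03      # assumed per-disk failure probability over the mission
print(survival(n, p, f_tolerated=2))       # baseline tolerance (assumed)
print(survival(n + 6, p, f_tolerated=3))   # with n extra mirrored parities
```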
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Representing action in extended logic programs
Maintainability: A Weaker Stabilizability-Like Notion for High Level Control The goal of most agents is not just to reach a goal state, but rather also (or alternatively) to put restrictions on its trajectory, in terms of states it must avoid and goals that it must 'maintain'. This is analogous to the notions of 'safety' and 'stability' in the discrete event systems and temporal logic community. In this paper we argue that the notion of 'stability' is too strong for formulating 'maintenance' goals of an agent - in particular, reactive and software agents - and give examples of such agents. We present a weaker notion of 'maintainability' and show that our agents which do not satisfy the stability criteria, do satisfy the weaker criteria. We give algorithms to test maintainability, and also to generate control for maintainability. We then develop the notion of 'supportability' that generalizes both 'maintainability' and 'stabilizability', develop an automata theory that distinguishes between exogenous and control actions, and develop a temporal logic based on it.
Invariance, Maintenance, and Other Declarative Objectives of Triggers - A Formal Characterization of Active Databases In this paper we take steps towards a systematic design of active features in an active database. We propose having declarative specifications that specify the objective of an active database and formulate the correctness of triggers with respect to such specifications. In the process we distinguish between the notions of 'invariance' and 'maintenance' and propose four different classes of specification constraints. We also propose three different types of triggers with distinct purposes and show through the analysis of an example from the literature, the correspondence between these trigger types and the specification classes. Finally, we briefly introduce the notion of k-maintenance that is important from the perspective of a reactive (active database) system.
Default Theory for Well Founded Semantics with Explicit Negation One aim of this paper is to define a default theory for Well Founded Semantics of logic programs which have been extended with explicit negation, such that the models of a program correspond exactly to the extensions of the default theory corresponding to the program.
Optimistic and Disjunctive Agent Design Problems The agent design problem is as follows: Given an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In previous research, it was shown that for two important classes of tasks (where an agent was required to either achieve some state of affairs or maintain some state of affairs), the agent design problem was PSPACE-complete. In this paper, we consider several important generalisations of such tasks. In an optimistic agent design problem, we simply ask whether an agent has at least some chance of bringing about a goal state. In a combined design problem, an agent is required to achieve some state of affairs while ensuring that some invariant condition is maintained. Finally, in a disjunctive design problem, we are presented with a number of goals and corresponding invariants--the aim is to design an agent that on any given run, will achieve one of the goals while maintaining the corresponding invariant. We prove that while the optimistic achievement and maintenance design problems are NP-complete, the PSPACE-completeness results obtained for achievement and maintenance tasks generalise to combined and disjunctive agent design.
Relating logic programming theories of actions and partial order planning In this paper we argue that logic programming theories of action allow us to identify subclasses for which the corresponding logic program has nice properties (such as acyclicity). As an example we extend the action description language A to allow executability conditions and show its formalization in logic programming. We show the relationship between the execution of partial order planners and the SLDNF tree with respect to the corresponding logic programs. In the end we briefly discuss how this relationship helps us in extending partial order planners to extended languages by following the corresponding logic program.
Planning for temporally extended goals In planning, goals have traditionally been viewed as specifying a set of desirable final states. Any plan that transforms the current state to one of these desirable states is viewed to be correct. Goals of this form are limited in what they can specify, and they also do not allow us to constrain the manner in which the plan achieves its objectives. We propose viewing goals as specifying desirable sequences of states, and a plan to be correct if its execution yields one of these desirable sequences. We present a logical language, a temporal logic, for specifying goals with this semantics. Our language is rich and allows the representation of a range of temporally extended goals, including classical goals, goals with temporal deadlines, quantified goals (with both universal and existential quantification), safety goals, and maintenance goals. Our formalism is simple and yet extends previous approaches in this area. We also present a planning algorithm that can generate correct plans for these goals. This algorithm has been implemented, and we provide some examples of the formalism at work. The end result is a planning system which can generate plans that satisfy a novel and useful set of conditions.
Actions with Indirect Effects (Preliminary Report)
Formulating diagnostic problem solving using an action language with narratives and sensing Given a system and unexpected observations about the system, a diagnosis is often viewed as a fault assignment to the various components of the system that is consistent with (or that explains) the observations. If the observations occur over time, and if we allow the occurrence of (deliberate) actions and (exogenous) events, then the traditional notion of a candidate diagnosis must be modified to consider the possible occurrence of actions and events that could account for the unexpected...
Narrative based Postdictive Reasoning for Cognitive Robotics. Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real world considerations is a challenge and crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks. In the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction triggered abnormality detection and re-planning for real-time robot control: (a) Narrative-based knowledge about the environment is obtained via a larger smart environment framework; and (b) abnormalities are postdicted from stable-models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of an approximate epistemic action theory based planner implemented via a translation to answer-set programming.
On deciding subsumption problems Subsumption is an important redundancy elimination method in automated deduction. A clause D is subsumed by a set S of clauses if there is a clause C in S and a substitution σ such that the literals of Cσ are included in D. In the field of automated model building, subsumption has been modified to an even stronger redundancy elimination method, namely the so-called clausal H-subsumption. Atomic H-subsumption emerges from clausal H-subsumption by restricting D to an atom and S to a set of atoms. Both clausal and atomic H-subsumption play an indispensable key role in automated model building. Moreover, problems equivalent to atomic H-subsumption have been studied with different terminologies in many areas of computer science. Both clausal and atomic H-subsumption are known to be intractable, i.e., complete for the second level of the polynomial hierarchy and NP-complete, respectively. In this paper, we present a new approach to deciding (clausal and atomic) H-subsumption that is based on a reduction to QSAT2 and SAT, respectively.
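The ordinary subsumption test that opens this abstract (find a substitution σ with Cσ ⊆ D) is small enough to sketch directly. This is the standard NP-complete subsumption check, not the H-subsumption variant the paper studies; the term encoding (tuples for compound terms, uppercase strings for variables) is an illustrative convention.

```python
# Backtracking test for ordinary clause subsumption: does some substitution
# map every literal of C into D? Encoding conventions are assumed.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pat, term, sigma):
    """Extend sigma so that pat.sigma == term, or return None."""
    if is_var(pat):
        if pat in sigma:
            return sigma if sigma[pat] == term else None
        s = dict(sigma)
        s[pat] = term
        return s
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            sigma = match(p, t, sigma)
            if sigma is None:
                return None
        return sigma
    return sigma if pat == term else None

def subsumes(C, D, sigma=None):
    """True iff some substitution maps every literal of C to a literal of D."""
    sigma = {} if sigma is None else sigma
    if not C:
        return True
    first, rest = C[0], C[1:]
    return any(s is not None and subsumes(rest, D, s)
               for s in (match(first, lit, sigma) for lit in D))

C = [("p", "X"), ("q", "X", "Y")]
D = [("p", "a"), ("q", "a", "b"), ("r", "c")]
print(subsumes(C, D))  # True: sigma = {X: a, Y: b}
```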
Implementation of Argus Argus is a programming language and system developed to support the construction and execution of distributed programs. This paper describes the implementation of Argus, with particular emphasis on the way we implement atomic actions, because this is where Argus differs most from other implemented systems. The paper also discusses the performance of Argus. The cost of actions is quite reasonable, indicating that action systems like Argus are practical.
Dynamically partitioning for solving QBF In this paper we present a new technique to solve Quantified Boolean Formulas (QBF). Our technique applies the idea of dynamic partitioning to QBF solvers. Dynamic partitioning has previously been utilized in #SAT solvers that count the number of models of a propositional formula. One of the main differences with the #SAT case comes from the solution learning techniques employed in search based QBF solvers. Extending solution learning to a partitioning solver involves some considerable complexities which we show how to resolve. We have implemented our ideas in a new QBF solver, and demonstrate that dynamic partitioning is able to increase the performance of search based solvers, sometimes significantly. Empirically our new solver offers performance that is superior to other search based solvers and in many cases superior to nonsearch based solvers.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment based on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance compared to other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.010033
0.017391
0.017391
0.008912
0.005797
0.004348
0.002813
0.001747
0.000452
0.000025
0.000001
0
0
0
Exploiting type-awareness in a self-recovering disk Data recoverability in the face of partial disk errors is an important prerequisite in modern storage. We have designed and implemented a prototype disk system that automatically ensures the integrity of stored data, and transparently recovers vital data in the event of integrity violations. We show that by using pointer knowledge, effective integrity assurance can be performed inside a block-based disk with negligible performance overheads. We also show how semantics-aware replication of blocks can help improve the recoverability of data in the event of partial disk errors with small space overheads. Our evaluation results show that for normal user workloads, our disk system has a performance overhead of only 1-5% compared to traditional disks.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Morpheme-based feature-rich language models using Deep Neural Networks for LVCSR of Egyptian Arabic. Egyptian Arabic (EA) is a colloquial version of Arabic. It is a low-resource, morphologically rich language that causes problems in Large Vocabulary Continuous Speech Recognition (LVCSR). Building LMs at the morpheme level is considered a better choice to achieve higher lexical coverage and better LM probabilities. Another approach is to utilize information from additional features such as morphological tags. On the other hand, LMs based on Neural Networks (NNs) with a single hidden layer have shown superiority over conventional n-gram LMs. Recently, Deep Neural Networks (DNNs) with multiple hidden layers have achieved better performance in various tasks. In this paper, we explore the use of feature-rich DNN-LMs, where the inputs to the network are a mixture of words and morphemes along with their features. Significant Word Error Rate (WER) reductions are achieved compared to traditional word-based LMs.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that the disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Multi-disk B-trees
Declustering techniques for parallelizing temporal access structures This paper addresses the issues of declustering temporal index and access structures for a single processor multiple independent disk architecture. The temporal index is the Monotonic B+-Tree which uses the time index temporal access structure. We devise a new algorithm, called multi-level round robin, for assigning tree nodes to multiple disks. The multi-level round robin declustering technique takes advantage of the append-only nature of temporal databases to achieve uniform load distribution, decrease response time, and increase the fanout of the tree by eliminating the need to store disk numbers within the tree nodes. We propose two declustering techniques for the time index access structures; one considers only time proximity while declustering, whereas the other considers both time proximity and data size. We investigate their performance over different types of temporal queries and show that various temporal queries have conflicting allocation criteria for the time index buckets. In addition, we devise two disk partition techniques for the time index buckets. The mutually exclusive technique partitions the disks into disjoint groups, whereas the shared disk technique allows the different types of buckets to share all disks
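A minimal sketch of one plausible reading of the multi-level round-robin idea: the disk of a tree node is computed from its level and left-to-right position, so no disk number needs to be stored inside the node. The offset rule and names below are illustrative assumptions, not the paper's exact algorithm.

```python
def disk_of(level, position, num_disks, level_offset):
    """Disk holding the node at `position` (0-based, left to right) in `level`;
    computable on the fly, so tree nodes need not store disk numbers."""
    return (level_offset[level] + position) % num_disks

# Hypothetical tree: levels of 1, 4 and 16 nodes spread over 5 disks.
# Start each level where the previous level left off (one round-robin flavor).
num_disks, level_sizes = 5, [1, 4, 16]
level_offset = [0]
for size in level_sizes[:-1]:
    level_offset.append((level_offset[-1] + size) % num_disks)
placement = [[disk_of(l, p, num_disks, level_offset) for p in range(level_sizes[l])]
             for l in range(len(level_sizes))]
```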
Optimal disk allocation for partial match queries The problem of disk allocation addresses the issue of how to distribute a file on several disks in order to maximize concurrent disk accesses in response to a partial match query. In this paper a coding-theoretic analysis of this problem is presented, and both necessary and sufficient conditions for the existence of strictly optimal allocation methods are provided. Based on a class of optimal codes, known as maximum distance separable codes, strictly optimal allocation methods are constructed. Using the necessary conditions proved, we argue that the standard definition of strict optimality is too strong and cannot be attained, in general. Hence, we reconsider the definition of optimality. Instead of basing it on an abstract definition that may not be attainable, we propose a new definition based on the best possible allocation method. Using coding theory, allocation methods that are optimal according to our proposed criterion are developed.
Hamming Filters: A Dynamic Signature File Organization for Parallel Stores
The gamma database machine project ABSTRACT This paper describes the design of the Gamma database machine and the techniques employed in its implementation. Gamma is a relational database machine currently operating on an Intel iPSC/2 hypercube with 32 processors and 32 disk drives. Gamma employs three key technical ideas which enable the architecture to be scaled to 100s of processors. First, all relations are horizontally partitioned across multiple disk drives, enabling relations to be scanned in parallel. Second, novel parallel algorithms based on hashing are used to implement the complex relational operators such as join and aggregate functions. Third, dataflow scheduling techniques are used to coordinate multioperator queries. By using these techniques it is possible to control the execution of very complex queries with minimal coordination - a necessity for configurations involving a very large number of processors. In addition to describing the design of the Gamma software, a thorough performance evaluation of the iPSC/2
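The first of the three ideas, horizontal partitioning, is easy to illustrate. A hedged Python sketch (hash placement only; the names are ours, and Gamma also supports other placement strategies):

```python
def hash_partition(relation, key, num_disks):
    """Horizontally partition a relation across disks by hashing a key attribute,
    so that a scan (or a hash join on that attribute) runs on all disks in parallel."""
    partitions = [[] for _ in range(num_disks)]
    for row in relation:
        partitions[hash(row[key]) % num_disks].append(row)
    return partitions

emp = [{"id": i, "dept": i % 7} for i in range(20)]
parts = hash_partition(emp, "dept", 4)   # one bucket list per disk
```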
On Periodic Resource scheduling for Continuous-Media Databases.
The DASDBS Project: Objectives, Experiences, and Future Prospects A retrospective of the Darmstadt database system project, also known as DASDBS, is presented. The project is aimed at providing data management support for advanced applications, such as geo-scientific information systems and office automation. Similar to the dichotomy of RSS and RDS in System R, a layered architectural approach was pursued: a storage management kernel serves as the lowest common denominator of the requirements of the various applications classes, and a family of application-oriented front-ends provides semantically richer functions on top of the kernel. The lessons that were learned from building the DASDBS system are discussed. Particular emphasis is placed on the following issues: the role of nested relations, the experiences with using object buffers for coupling the system with the programming-language environment and the learning process in implementing multilevel transactions.
Parallelism in relational database management systems In order to provide real-time responses to complex queries involving large volumes of data, it has become necessary to exploit parallelism in query processing. This paper addresses the issues and solutions relating to intraquery parallelism in a relational database management system (DBMS). We provide a broad framework for the study of the numerous issues that need to be addressed in supporting parallelism efficiently and flexibly. The alternatives for a parallel architecture system are discussed, followed by the focus on how a query can be parallelized and how that affects load balancing of the different tasks created. The final part of the paper contains information about how the IBM DATABASE 2™ (DB2®) Version 3 product provides support for I/O parallelism to reduce response time for data-intensive queries.
On-Demand Data Elevation in Hierarchical Multimedia Storage Servers Given the present cost of memories and the very large storage and bandwidth requirements of large-scale multimedia databases, hierarchical storage servers (which consist of RAM, disk storage, and robot-based tertiary libraries) are becoming increasingly popular. However, related research is scarce and employs tertiary storage for storage augmentation purposes only. This work, exploiting the ever-increasing performance offered by (particularly) modern tape library products, aims to utilize tertiary storage in order to augment the system's performance. We consider the issue of elevating continuous data from its permanent place in tertiary storage for display purposes. Our primary goals are to save on the secondary storage bandwidth that traditional techniques require for the display of continuous objects, while requiring no additional RAM buffer space. To this end we develop algorithms for sharing the responsibility for the playback between the secondary and tertiary devices and for placing the blocks of continuous objects on tapes, and show how they achieve the above goals. We study these issues for different commercial tape library products with different bandwidths and tape capacities, and in environments with and without the multiplexing of tape libraries.
Reducing Energy Consumption of Disk Storage Using Power-Aware Cache Management Reducing energy consumption is an important issue for data centers. Among the various components of a data center, storage is one of the biggest consumers of energy. Previous studies have shown that the average idle period for a server disk in a data center is very small compared to the time taken to spin down and spin up. This significantly limits the effectiveness of disk power management schemes. This paper proposes several power-aware storage cache management algorithms that provide more opportunities for the underlying disk power management schemes to save energy. More specifically, we present an off-line power-aware greedy algorithm that is more energy-efficient than Belady's off-line algorithm (which minimizes cache misses only). We also propose an online power-aware cache replacement algorithm. Our trace-driven simulations show that, compared to LRU, our algorithm saves 16% more disk energy and provides 50% better average response time for OLTP I/O workloads. We have also investigated the effects of four storage cache write policies on disk energy consumption.
Investigation of leading HPC I/O performance using a scientific-application derived benchmark With the exponential growth of high-fidelity sensor and simulated data, the scientific community is increasingly reliant on ultrascale HPC resources to handle their data analysis requirements. However, to utilize such extreme computing power effectively, the I/O components must be designed in a balanced fashion, as any architectural bottleneck will quickly render the platform intolerably inefficient. To understand I/O performance of data-intensive applications in realistic computational settings, we develop a lightweight, portable benchmark called MADbench2, which is derived directly from a large-scale Cosmic Microwave Background (CMB) data analysis package. Our study represents one of the most comprehensive I/O analyses of modern parallel filesystems, examining a broad range of system architectures and configurations, including Lustre on the Cray XT3 and Intel Itanium2 cluster; GPFS on IBM Power5 and AMD Opteron platforms; two BlueGene/L installations utilizing GPFS and PVFS2 filesystems; and CXFS on the SGI Altix3700. We present extensive synchronous I/O performance data comparing a number of key parameters including concurrency, POSIX- versus MPI-IO, and unique- versus shared-file accesses, using both the default environment as well as highly-tuned I/O parameters. Finally, we explore the potential of asynchronous I/O and quantify the volume of computation required to hide a given volume of I/O. Overall our study quantifies the vast differences in performance and functionality of parallel filesystems across state-of-the-art platforms, while providing system designers and computational scientists a lightweight tool for conducting further analyses.
Dynamic resource allocation for database servers running on virtual storage As increasing amounts of valuable information are produced and persist digitally, the ability to determine the origin of data becomes important. In science, medicine, commerce, and government, data provenance tracking is essential for rights protection, ...
Homomorphisms of conjunctive normal forms We study homomorphisms of propositional formulas in CNF generalizing symmetries considered by Krishnamurthy. If φ:H → F is a homomorphism, then unsatisfiability of H implies unsatisfiability of F. Homomorphisms from F to a subset F' of F (endomorphisms) are of special interest, since in such cases F and F' are satisfiability-equivalent. We show that the smallest subsets F' of a formula F for which an endomorphism F → F' exists are mutually isomorphic. Furthermore, we study connections between homomorphisms and autark assignments. We introduce the concept of "proof by homomorphism" which is based on the observation that there exist sets Γ of unsatisfiable formulas such that (i) formulas in Γ can be recognized in polynomial time, and (ii) for every unsatisfiable formula F there exist some H ∈ Γ and a homomorphism φ: H → F. We identify several sets Γ of unsatisfiable formulas satisfying (i) and (ii) for which proofs by homomorphism w.r.t. Γ and tree resolution proofs can be simulated by each other in polynomial time.
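Under the natural reading of the definition (a complement-preserving map on literals that sends every clause of H onto a clause of F), checking a candidate homomorphism is straightforward. A small sketch with clauses in DIMACS-style signed-integer form; the helper name is ours:

```python
def check_homomorphism(phi, H, F):
    """phi maps every literal of H (signed ints, -x is the negation of x) to a
    literal of F. Verifies phi(-x) == -phi(x) and that the image of each clause
    of H is a clause of F."""
    F_clauses = {frozenset(c) for c in F}
    if any(phi.get(-lit) != -img for lit, img in phi.items()):
        return False
    return all(frozenset(phi[l] for l in c) in F_clauses for c in H)

# Toy usage: H's two clauses map onto two of F's clauses
H = [[1, 2], [-1, 2]]
F = [[3, 4], [-3, 4], [-4]]
phi = {1: 3, -1: -3, 2: 4, -2: -4}
assert check_homomorphism(phi, H, F)
```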
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
score_0..score_13: 1.076299, 0.013495, 0.011679, 0.009452, 0.001149, 0.000657, 0.000286, 0.000121, 0.000015, 0.000001, 0, 0, 0, 0
Optimizing throughput in a workstation-based network file system over a high bandwidth local area network This paper describes methods of optimizing a client/server network file system to take advantage of high bandwidth local area networks in a conventional distributed computing environment. The environment contains hardware that removes network and disk bandwidth bottlenecks. The remaining bottlenecks at clients include excessive context switching, inefficient data translation, and cumbersome data encapsulation methods. When these are removed, the null-write performance of a current implementation of Sun's Network File System improves by 30%. A prototype system including a high speed RAM disk demonstrates an 18% improvement in overall write throughput. The prototype system fully utilizes the available peripheral bandwidth of the server.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
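A compact numpy sketch of the Gram-matrix route described above, using an RBF kernel as the running example; the kernel choice, the gamma parameter, and the function name are our assumptions:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Project training points onto the top principal components in the feature
    space induced by an RBF kernel, via the (doubly centered) Gram matrix."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(K)
    J = np.full((n, n), 1.0 / n)
    Kc = K - J @ K - K @ J + J @ K @ J            # centering in feature space
    w, V = np.linalg.eigh(Kc)                     # ascending eigenvalues
    top = np.argsort(w)[::-1][:n_components]
    alphas = V[:, top] / np.sqrt(np.maximum(w[top], 1e-12))  # normalized coefficients
    return Kc @ alphas                             # projections of training points

Z = kernel_pca(np.random.default_rng(0).standard_normal((50, 5)), n_components=2)
```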
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
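The load-balancing effect of a mirrored pair is easy to see in a toy shortest-seek dispatcher (reads go to the nearer arm; a write would go to both copies). This illustrates only the parallel-read idea, not the paper's deadline-aware RT-DMQ/RT-CMQ policies:

```python
def dispatch_read(cylinder, arms):
    """Send the read to the mirror whose arm is closest (shortest seek)."""
    disk = min(range(len(arms)), key=lambda d: abs(arms[d] - cylinder))
    arms[disk] = cylinder
    return disk

arms = [0, 500]                                   # two mirrored disks
served_by = [dispatch_read(c, arms) for c in (40, 480, 100, 450)]  # [0, 1, 0, 1]
```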
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
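The core numerical step, solving the linearized least-squares problem through a matrix square root rather than through the information matrix itself, can be sketched in a few lines of numpy (a generic illustration, not the paper's variable-ordering machinery):

```python
import numpy as np

def square_root_solve(A, b):
    """Solve min ||Ax - b||^2 by factoring the measurement Jacobian A = QR;
    R is a square root of the information matrix (R^T R = A^T A), so A^T A
    is never formed explicitly."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)   # R is triangular: back-substitution

# Toy stand-in for stacked, linearized odometry/landmark constraints
rng = np.random.default_rng(1)
A, b = rng.standard_normal((30, 6)), rng.standard_normal(30)
x = square_root_solve(A, b)
```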
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, that is required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Red-Black Relaxed Plan Heuristics.
Relaxation of Temporal Planning Problems Relaxation is ubiquitous in the practical resolution of combinatorial problems. If a valid relaxation of an instance has no solution then the original instance has no solution. A tractable relaxation can be built and solved in polynomial time. The most obvious application is the efficient detection of certain unsolvable instances. We review existing relaxation techniques in temporal planning and propose an alternative relaxation inspired by a tractable class of temporal planning problems. Our approach is orthogonal to relaxations based on the ignore-all-deletes approach used in non-temporal planning. We show that our relaxation can even be applied to non-temporal problems, and can also be used to extend a tractable class of temporal planning problems.
Managing Temporal cycles in Planning Problems Requiring Concurrency. To correctly model certain real-world planning problems, it is essential to take into account time. This is the case for problems requiring the concurrent execution of actions (known as temporally expressive problems). In this paper, we define and study the notion of temporally cyclic problems, that is problems involving sets of cyclically dependent actions. We characterize those temporal planning languages, which can express temporally cyclic problems. We also present a polynomial-time algorithm, which transforms a temporally cyclic problem into an equivalent acyclic problem. Applying our transformation allows any temporal planner to solve temporally cyclic problems without explicitly managing cyclicity. We first present our results for temporal PDDL (Planning Domain Description Language) 2.1 and then extend them to a language that allows conditions over arbitrary intervals and effects at arbitrary instants.
Solving Simple Planning Problems with More Inference and No Search Many benchmark domains in AI planning including Blocks, Logistics, Gripper, Satellite, and others lack the interactions that characterize puzzles and can be solved non-optimally in low polynomial time. They are indeed easy problems for people, although as with many other problems in AI, not always easy for machines. In this paper, we address the question of whether simple problems such as these can be solved in a simple way, i.e., without search, by means of a domain-independent planner. We address this question empirically by extending the constraint-based planner CPT with additional domain-independent inference mechanisms. We show then for the first time that these and several other benchmark domains can be solved with no backtracks while performing only polynomial node operations. This is a remarkable finding in our view that suggests that the classes of problems that are solvable without search may be actually much broader than the classes that have been identified so far by work in Tractable Planning.
When is temporal planning really temporal? While even STRIPS planners must search for plans of unbounded length, temporal planners must also cope with the fact that actions may start at any point in time. Most temporal planners cope with this challenge by restricting action start times to a small set of decision epochs, because this enables search to be carried out in state-space and leverages powerful state-based reachability heuristics, originally developed for classical planning. Indeed, decision-epoch planners won the International Planning Competition's Temporal Planning Track in 2002, 2004 and 2006. However, decision-epoch planners have a largely unrecognized weakness: they are incomplete. In order to characterize the cause of incompleteness, we identify the notion of required concurrency, which separates expressive temporal action languages from simple ones. We show that decision-epoch planners are only complete for languages in the simpler class, and we prove that the simple class is 'equivalent' to STRIPS! Surprisingly, no problems with required concurrency have been included in the planning competitions. We conclude by designing a complete state-space temporal planning algorithm, which we hope will be able to achieve high performance by leveraging the heuristics that power decision-epoch planners.
Parallel non-binary planning in polynomial time This paper formally presents a class of planning problems which allows non-binary state variables and parallel execution of actions. The class is proven to be tractable, and we provide a sound and complete polynomial time algorithm for planning within this class. This result means that we are getting close to tackling realistic planning problems in sequential control, where a restricted problem representation is often sufficient, but where the size of the problems makes tractability an important issue.
Red-Black Relaxed Plan Heuristics Reloaded.
Parameterized Complexity and Kernel Bounds for Hard Planning Problems The propositional planning problem is a notoriously difficult computational problem. Downey et al. (1999) initiated the parameterized analysis of planning (with plan length as the parameter) and Bäckström et al. (2012) picked up this line of research and provided an extensive parameterized analysis under various restrictions, leaving open only one stubborn case. We continue this work and provide a full classification. In particular, we show that the case when actions have no preconditions and at most $e$ postconditions is fixed-parameter tractable if $e\leq 2$ and W[1]-complete otherwise. We show fixed-parameter tractability by a reduction to a variant of the Steiner Tree problem; this problem has been shown fixed-parameter tractable by Guo et al. (2007). If a problem is fixed-parameter tractable, then it admits a polynomial-time self-reduction to instances whose input size is bounded by a function of the parameter, called the kernel. For some problems, this function is even polynomial, which has desirable computational implications. Recent research in parameterized complexity has focused on classifying fixed-parameter tractable problems on whether they admit polynomial kernels or not. We revisit all the previously obtained restrictions of planning that are fixed-parameter tractable and show that none of them admits a polynomial kernel unless the polynomial hierarchy collapses to its third level.
In defense of PDDL axioms There is controversy as to whether explicit support for PDDL-like axioms and derived predicates is needed for planners to handle real-world domains effectively. Many researchers have deplored the lack of precise semantics for such axioms, while others have argued that it might be best to compile them away. We propose an adequate semantics for PDDL axioms and show that they are an essential feature by proving that it is impossible to compile them away if we restrict the growth of plans and domain descriptions to be polynomial. These results suggest that adding a reasonable implementation to handle axioms inside the planner is beneficial for performance. Our experiments confirm this suggestion.
What is planning in the presence of sensing? Despite the existence of programs that are able to generate so-called conditional plans, there has yet to emerge a clear and general specification of what it is these programs are looking for: what exactly is a plan in this setting, and when is it correct? In this paper, we develop and motivate a specification within the situation calculus of conditional and iterative plans over domains that include binary sensing actions. The account is built on an existing theory of action which includes a solution to the frame problem, and an extension to it that handles sensing actions and the effect they have on the knowledge of a robot. Plans are taken to be programs in a new simple robot program language, and the planning task is to find a program that would be known by the robot at the outset to lead to a final situation where the goal is satisfied. This specification is used to analyze the correctness of a small example plan, as well as variants that have redundant or missing sensing actions. We also investigate whether the proposed robot program language is powerful enough to serve for any intuitively achievable goal.
A Relationship between Difference Hierarchies and Relativized Polynomial Hierarchies. Chang and Kadin have shown that if the difference hierarchy over NP collapses to level $k$, then the polynomial hierarchy (PH) is equal to the $k$th level of the difference hierarchy over $\Sigma_{2}^{p}$. We simplify their proof and obtain a slightly stronger conclusion: if the difference hierarchy over NP collapses to level $k$, then $PH = \left(P_{(k-1)-tt}^{NP}\right)^{NP}$. We also extend the result to classes other than NP: for any class $C$ that has $\leq_{m}^{p}$-complete sets and is closed under $\leq_{conj}^{p}$- and $\leq_{m}^{NP}$-reductions, if the difference hierarchy over $C$ collapses to level $k$, then $PH^{C} = \left(P_{(k-1)-tt}^{NP}\right)^{C}$. Then we show that the exact counting class $C_{=}P$ is closed under $\leq_{disj}^{p}$- and $\leq_{m}^{co-NP}$-reductions. Consequently, if the difference hierarchy over $C_{=}P$ collapses to level $k$, then $PH^{PP}$ is equal to $\left(P_{(k-1)-tt}^{NP}\right)^{PP}$. In contrast, the difference hierarchy over the closely related class PP is known to collapse. Finally, we consider two ways of relativizing the bounded query class $P_{k-tt}^{NP}$: the restricted relativization $P_{k-tt}^{NP^{C}}$ and the full relativization $\left(P_{k-tt}^{NP}\right)^{C}$. If $C$ is NP-hard, then we show that the two relativizations are different unless $PH^{C}$ collapses.
Counting, Selecting, and Sorting by Query-Bounded Machines We study the query complexity of counting, selecting, and sorting functions. That is, for a given set $A$ and a positive integer $k$, we ask how many queries to an arbitrary oracle a polynomial-time machine on input $(x_1, x_2, \ldots, x_k)$ needs in order to determine how many strings of the input are in $A$. We also ask how many queries are necessary to select a string in $A$ from the input $(x_1, x_2, \ldots, x_k)$ if such a string exists, and to sort the input $(x_1, x_2, \ldots, x_k)$ with respect to the ordering $x \preceq y$ if and only if $x \in A$ implies $y \in A$. We obtain optimal query bounds for these problems, and show that sets for which these functions have a low query complexity must be easy in some sense. For such sets we obtain optimal placements in the extended low hierarchy. We also show that in the case of NP-complete sets the lower bounds for counting and selecting hold unless P=NP. Finally, we relate these notions to cheatability and p-superterseness. Our results yield as corollaries extensions of previously known results.
Dynamic Data Distribution (D3) in a Shared-Nothing Multiprocessor Data Store
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0..score_13: 1.107453, 0.106818, 0.040202, 0.035634, 0.01775, 0.0068, 0.002667, 0.000593, 0.000083, 0.000003, 0, 0, 0, 0
On the Unique Satisfiability Problem
The Unique Horn-Satisfiability Problem and Quadratic Boolean Equations The unique satisfiability problem for general Boolean expressions has attracted interest in recent years in connection with basic complexity issues [12,13]. We investigate here Unique Horn-Satisfiability, i.e. the subclass of Unique-Sat restricted to Horn expressions. We introduce two operators, reduction and shrinking, each transforming a given Horn expression into another Horn expression involving strictly fewer variables and preserving the unique satisfiability property, if present.
Nondeterministic Turing machines with modified acceptance
NP trees and Carnap's modal logic We consider problems and complexity classes definable by interdependent queries to an oracle in NP. How the queries depend on each other is specified by a directed graph G. We first study the class of problems where G is a general dag and show that this class coincides with $\Delta_2^p$. We then consider the class where G is a tree. Our main result states that this class is identical to $P^{NP[O(\log n)]}$, the class of problems solvable in polynomial time with a logarithmic number of queries to an oracle in NP. Using this result we show that the following problems are all $P^{NP[O(\log n)]}$-complete: validity-checking of formulas in Carnap's modal logic, checking whether a formula is almost surely valid over finite structures in modal logics K, T, and S4, and checking whether a formula belongs to the stable set of beliefs generated by a propositional theory.
On Computing Boolean Connectives of Characteristic Functions This paper is a study of the existence of polynomial time Boolean connective functions for languages. A language L has an AND function if there is a polynomial time f such that f(x, y) ∈ L ⇐⇒ x ∈ L and y ∈ L. L has an OR function if there is a polynomial time g such that g(x, y) ∈ L ⇐⇒ x ∈ L or y ∈ L. While all NP complete sets have these functions, Graph Isomorphism, which is probably not complete, is also shown to have both AND and OR functions. The results in this paper characterize the complete sets for the classes $D^p$ and $P^{SAT[O(\log n)]}$ in terms of AND and OR, and relate these functions to the structure of the Boolean hierarchy and the query hierarchies. Also, this paper shows that the complete sets for the levels of the Boolean hierarchy above the second level cannot have AND or OR unless the polynomial hierarchy collapses. Finally, most of the structural properties of the Boolean hierarchy and query hierarchies are shown to depend only on the existence of AND and OR functions for the NP complete sets.
On the Power of Deterministic Reductions to C=P The counting class $C_{=}P$, which captures the notion of "exact counting," while extremely powerful under various nondeterministic reductions, is quite weak under polynomial-time deterministic reductions. We discuss the analogies between NP and co-$C_{=}P$, which allow us to derive many interesting results for such deterministic reductions to co-$C_{=}P$. We exploit these results to obtain some interesting oracle separations. Most importantly, we show that there exists an oracle $A$ such that $\oplus P^{A} \not\subseteq P^{C_{=}P^{A}}$ and $BPP^{A} \not\subseteq P^{C_{=}P^{A}}$. Therefore, techniques that would prove that $C_{=}P$ and PP are polynomial-time Turing equivalent, or that $C_{=}P$ is polynomial-time Turing hard for the polynomial-time hierarchy, would not relativize.
On unique satisfiability and the threshold behavior of randomized reductions The research presented in this paper is motivated by the following new results on the complexity of the unique satisfiability problem, USAT. • If USAT $\equiv_{m}^{p} \overline{USAT}$, then $D^{p}$ = co-$D^{p}$ and PH collapses. • If USAT ∈ co-$D^{p}$, then PH collapses. • If USAT has $OR_{\omega}$, then PH collapses. The proofs of these results use only the fact that USAT is complete for $D^{p}$ under randomized reductions, even though the probability bound of these reductions may be low. Furthermore, these results show that the structural complexity of USAT and of $D^{p}$ many-one complete sets are very similar, and so they lend support to the argument that even sets complete under "weak" randomized reductions can capture the properties of the many-one complete sets. However, under these "weak" randomized reductions, USAT is complete for $P^{SAT[\log n]}$ as well, and in this case, USAT does not capture the properties of the sets many-one complete for $P^{SAT[\log n]}$. To explain this anomaly, the concept of the threshold behavior of randomized reductions is developed. Tight bounds on the thresholds are shown for NP, co-NP, $D^{p}$ and co-$D^{p}$. Furthermore, these results can be generalized to give upper and lower bounds on the thresholds for the Boolean Hierarchy. These upper bounds are expressed in terms of Fibonacci numbers.
The 1-Versus-2 Queries Problem Revisited The 1-versus-2 queries problem, which has been extensively studied in computational complexity theory, asks in its generality whether every efficient algorithm that makes at most 2 queries to a $\Sigma_k^p$-complete language $L_k$ has an efficient simulation that makes at most 1 query to $L_k$. We obtain solutions to this problem for hypotheses weaker than previously considered. We prove that: (I) for each $k\geq 2$, $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}\subseteq \mathrm{ZPP}^{\Sigma^{p}_{k}[1]}\Rightarrow \mathrm{PH}=\Sigma^{p}_{k}$, and (II) $\mathrm{P}^{\mathrm{NP}[2]}_{tt}\subseteq \mathrm{ZPP}^{\mathrm{NP}[1]}\Rightarrow \mathrm{PH}=\mathrm{S}_2^p$. Here, for any complexity class $\mathcal{C}$ and integer $j\geq 1$, we define $\mathrm{ZPP}^{\mathcal{C}[j]}$ to be the class of problems solvable by zero-error randomized algorithms that run in polynomial time, make at most $j$ queries to $\mathcal{C}$, and succeed with probability at least $1/2+1/\mathrm{poly}(\cdot)$. This same definition of $\mathrm{ZPP}^{\mathcal{C}[j]}$, also considered in Cai and Chakaravarthy (J. Comb. Optim. 11(2):189–202, 2006), subsumes the class of problems solvable by randomized algorithms that always answer correctly in expected polynomial time and make at most $j$ queries to $\mathcal{C}$. Hemaspaandra, Hemaspaandra, and Hempel (SIAM J. Comput. 28(2):383–393, 1998), for $k>2$, and Buhrman and Fortnow (J. Comput. Syst. Sci. 59(2):182–194, 1999), for $k=2$, had obtained the same consequence as ours in (I) using the stronger hypothesis $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}\subseteq \mathrm{P}^{\Sigma^{p}_{k}[1]}$. Fortnow, Pavan, and Sengupta (J. Comput. Syst. Sci. 74(3):358–363, 2008) had obtained the same consequence as ours in (II) using the stronger hypothesis $\mathrm{P}^{\mathrm{NP}[2]}_{tt}\subseteq \mathrm{P}^{\mathrm{NP}[1]}$. Our results may also be viewed as steps towards obtaining solutions to arguably the most general form of the 1-versus-2 queries problem: for any $k\geq 1$, whether $\mathrm{P}^{\Sigma^{p}_{k}[2]}_{tt}$ can be simulated in $\mathrm{BPP}^{\Sigma^{p}_{k}[1]}$.
Faster extraction of high-level minimal unsatisfiable cores Various verification techniques are based on SAT's capability to identify a small, or even minimal, unsatisfiable core in case the formula is unsatisfiable, i.e., a small subset of the clauses that are unsatisfiable regardless of the rest of the formula. In most cases it is not the core itself that is being used, rather it is processed further in order to check which clauses from a preknown set of Interesting Constraints (where each constraint is modeled with a conjunction of clauses) participate in the proof. The problem of minimizing the participation of interesting constraints was recently coined high-level minimal unsatisfiable core by Nadel [15]. Two prominent examples of verification techniques that need such small cores are 1) abstraction-refinement model-checking techniques, which use the core in order to identify the state variables that will be used for refinement (smaller number of such variables in the core implies that more state variables can be replaced with free inputs in the abstract model), and 2) assumption minimization, where the goal is to minimize the usage of environment assumptions in the proof, because these assumptions have to be proved separately. We propose seven improvements to the recent solution given in [15], which together result in an overall reduction of 55% in run time and 73% in the size of the resulting core, based on our experiments with hundreds of industrial test cases. The optimized procedure is also better empirically than the assumptions-based minimization technique.
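For orientation, the baseline that such work improves on is the textbook deletion loop over the interesting constraints. A sketch, where is_unsat is a hypothetical placeholder for a SAT-solver call on the clauses of the retained constraints:

```python
def minimize_core(constraints, is_unsat):
    """Deletion-based minimization: try dropping each interesting constraint and
    keep the drop whenever the remainder is still unsatisfiable. The result is
    minimal in the sense that no single constraint can be removed."""
    core = list(constraints)
    for c in list(core):
        trial = [x for x in core if x is not c]
        if is_unsat(trial):
            core = trial
    return core
```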
Formal Characterizations of Active Databases: Part II This paper presents a formal framework for specifying active database systems. Declarative characterization of active databases allows additional flexibility in defining an implementation-independent semantics of the active rules. By making a clear distinction between actual and hypothetical execution of the actions, one can make claims about the (possible) effects of a sequence of actions and prove them, without actually executing it. The results that we present extend the active...
Planning for contingencies: a decision-based approach A fundamental assumption made by classical AI planners is that there is no uncertainty in the world: the planner has full knowledge of the conditions under which the plan will be executed and the outcome of every action is fully predictable. These planners cannot therefore construct contingency plans, i.e., plans in which different actions are performed in different circumstances. In this paper we discuss some issues that arise in the representation and construction of contingency plans and describe Cassandra, a partial-order contingency planner. Cassandra uses explicit decision-steps that enable the agent executing the plan to decide which plan branch to follow. The decision-steps in a plan result in subgoals to acquire knowledge, which are planned for in the same way as any other subgoals. Cassandra thus distinguishes the process of gathering information from the process of making decisions. The explicit representation of decisions in Cassandra allows a coherent approach to the problems of contingent planning, and provides a solid base for extensions such as the use of different decision-making procedures.
Selective versioning in a secure disk system Making vital disk data recoverable even in the event of OS compromises has become a necessity, in view of the increased prevalence of OS vulnerability exploits over the recent years. We present the design and implementation of a secure disk system, SVSDS, that performs selective, flexible, and transparent versioning of stored data, at the disk-level. In addition to versioning, SVSDS actively enforces constraints to protect executables and system log files. Most existing versioning solutions that operate at the disk-level are unaware of the higher-level abstractions of data, and hence are not customizable. We evolve a hybrid solution that combines the advantages of disk-level and file-system--level versioning systems thereby ensuring security, while at the same time allowing flexible policies. We implemented and evaluated a software-level prototype of SVSDS in the Linux kernel and it shows that the space and performance overheads associated with selective versioning at the disk level are minimal.
A survey of flow cytometry data analysis methods. Flow cytometry (FCM) is widely used in health research and in treatment for a variety of tasks, such as in the diagnosis and monitoring of leukemia and lymphoma patients, providing the counts of helper-T lymphocytes needed to monitor the course and treatment of HIV infection, the evaluation of peripheral blood hematopoietic stem cell grafts, and many other diseases. In practice, FCM data analysis is performed manually, a process that requires an inordinate amount of time and is error-prone, nonreproducible, nonstandardized, and not open for re-evaluation, making it the most limiting aspect of this technology. This paper reviews state-of-the-art FCM data analysis approaches using a framework introduced to report each of the components in a data analysis pipeline. Current challenges and possible future directions in developing fully automated FCM data analysis tools are also outlined.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
score_0..score_13: 1.008356, 0.014266, 0.009594, 0.006065, 0.005389, 0.002889, 0.001437, 0.000171, 0.000046, 0, 0, 0, 0, 0
Designing Dependable Storage Solutions for Shared Application Environments The costs of data loss and unavailability can be large, so businesses use many data protection techniques such as remote mirroring, snapshots, and backups to guard against failures. Choosing an appropriate combination of techniques is difficult because there are numerous approaches for protecting data and allocating resources. Storage system architects typically use ad hoc techniques, often resulting in overengineered, expensive solutions or underprovisioned, inadequate ones. In contrast, this paper presents a principled, automated approach for designing dependable storage solutions for multiple applications in shared environments. Our contributions include search heuristics for intelligent exploration of the large design space and modeling techniques for capturing interactions between applications during recovery. Using realistic storage system requirements, we show that our design tool produces designs that cost up to half as much, in initial outlays and expected data penalties, as the designs produced by an emulated human design process. Additionally, we compare our design tool to a random search heuristic and a genetic algorithm metaheuristic, and show that our approach consistently produces better designs for the cases we have studied. Finally, we study the sensitivity of our design tool to several input parameters.
Dynamically Quantifying and Improving the Reliability of Distributed Storage Systems In this paper, we argue that the reliability of large-scale storage systems can be significantly improved by using better reliability metrics and more efficient policies for recovering from hardware failures. Specifically, we make three main contributions. First, we introduce NDS (Normalcy Deviation Score), a new metric for dynamically quantifying the reliability status of a storage system. Second, we propose MinI (Minimum Intersection), a novel recovery scheduling policy that improves reliability by efficiently reconstructing data after a hardware failure. MinI uses NDS to trade off reliability and performance in making its scheduling decisions. Third, we evaluate NDS and MinI for three common data-allocation schemes and a number of different parameters. Our evaluation focuses on a distributed storage system based on erasure codes. We find that MinI improves reliability significantly, as compared to conventional policies.
On the road to recovery: restoring data after disasters Restoring data operations after a disaster is a daunting task: how should recovery be performed to minimize data loss and application downtime? Administrators are under considerable pressure to recover quickly, so they lack time to make good scheduling decisions. They schedule recovery based on rules of thumb, or on pre-determined orders that might not be best for the failure occurrence. With multiple workloads and recovery techniques, the number of possibilities is large, so the decision process is not trivial.This paper makes several contributions to the area of data recovery scheduling. First, we formalize the description of potential recovery processes by defining recovery graphs. Recovery graphs explicitly capture alternative approaches for recovering workloads, including their recovery tasks, operational states, timing information and precedence relationships. Second, we formulate the data recovery scheduling problem as an optimization problem, where the goal is to find the schedule that minimizes the financial penalties due to downtime, data loss and vulnerability to subsequent failures. Third, we present several methods for finding optimal or near-optimal solutions, including priority-based, randomized and genetic algorithm-guided ad hoc heuristics. We quantitatively evaluate these methods using realistic storage system designs and workloads, and compare the quality of the algorithms' solutions to optimal solutions provided by a math programming formulation and to the solutions from a simple heuristic that emulates the choices made by human administrators. We find that our heuristics' solutions improve on the administrator heuristic's solutions, often approaching or achieving optimality.
A framework for evaluating storage system dependability Designing storage systems to provide business continuity in the face of failures requires the use of various data protection techniques, such as backup, remote mirroring, point-in-time copies and vaulting, often in concert. Predicting the dependability provided by such compositions of techniques is difficult, yet necessary for dependable system design. We present a framework for evaluating the dependability of data storage systems, including both individual data protection techniques and their compositions. Our models estimate storage system recovery time, data loss, normal mode system utilization and operational costs under a variety of failure scenarios. We demonstrate the effectiveness of these modeling techniques through a case study using real-world storage system designs and workloads.
A fresh look at the reliability of long-term digital storage Emerging Web services, such as email, photo sharing, and web site archives, must preserve large volumes of quickly accessible data indefinitely into the future. The costs of doing so often determine whether the service is economically viable. We make the case that these applications' demands on large scale storage systems over long time horizons require us to reevaluate traditional system designs. We examine threats to long-lived data from an end-to-end perspective, taking into account not just hardware and software faults but also faults due to humans and organizations. We present a simple model of long-term storage failures that helps us reason about various strategies for addressing some of these threats. Using this model we show that the most important strategies for increasing the reliability of long-term storage are detecting latent faults quickly, automating fault repair to make it cheaper and faster, and increasing the independence of data replicas.
The Mini and Micro Industries
EVENODD: an efficient scheme for tolerating double disk failures in RAID architectures We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employs optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.
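A direct, unoptimized transcription of the two parity columns for small arrays (integers stand in for sectors, ^ models bytewise XOR; m must be prime, and real implementations XOR whole blocks):

```python
def evenodd_parities(data, m):
    """EVENODD parity columns for an (m-1) x m data array (m prime).
    Returns the horizontal parities P and the diagonal parities Q."""
    P, Q, S = [0] * (m - 1), [0] * (m - 1), 0
    for i in range(m - 1):
        for j in range(m):
            P[i] ^= data[i][j]                 # horizontal parity of row i
            if (i + j) % m == m - 1:
                S ^= data[i][j]                # the "missing" diagonal adjuster
    for i in range(m - 1):
        Q[i] = S
        for r in range(m - 1):
            for j in range(m):
                if (r + j) % m == i:
                    Q[i] ^= data[r][j]         # diagonal i (imaginary row omitted)
    return P, Q

m = 5
data = [[(m * i + j) & 0xFF for j in range(m)] for i in range(m - 1)]
P, Q = evenodd_parities(data, m)
```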
Data allocation for multidisk databases This paper deals with I/O throughput maximization in a single processor/multidisk database system, by means of optimal allocation of entire relations (or other nonfragmented data objects) to disks. We examine the cases in which such allocation is beneficial, and present a mathematical formulation of the problem. This formulation is shown to be flexible enough to accommodate various objectives and constraints of a typical system design.
A continuum of disk scheduling algorithms A continuum of disk scheduling algorithms, V(R), having endpoints V(0) = SSTF and V(1) = SCAN, is defined. V(R) maintains a current SCAN direction (in or out) and services next the request with the smallest effective distance. The effective distance of a request that lies in the current direction is its physical distance (in cylinders) from the read/write head. The effective distance of a request in the opposite direction is its physical distance plus R x (total number of cylinders on the disk). By use of simulation methods, it is shown that this definitional continuum also provides a continuum in performance, both with respect to the mean and with respect to the standard deviation of request waiting time. For objective functions that are linear combinations of the two measures, μW + kσW, intermediate points of the continuum are seen to provide performance uniformly superior to both SSTF and SCAN. A method of implementing V(R) and the results of its experimental use in a real system are presented.
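The effective-distance rule above is simple enough to state directly in code. The sketch below is a hedged illustration, not the paper's implementation: names are hypothetical, `direction` is +1 (toward higher cylinders) or -1, and R = 0 recovers SSTF while R = 1 behaves like SCAN.

```python
# Pick the next request under the V(R) effective-distance rule.
def next_request(head, direction, pending, R, num_cylinders):
    """Return the pending cylinder with the smallest effective distance."""
    def effective_distance(cyl):
        dist = abs(cyl - head)
        same_direction = (cyl - head) * direction >= 0
        # Requests against the current direction are penalized by R * cylinders.
        return dist if same_direction else dist + R * num_cylinders
    return min(pending, key=effective_distance)

# Example: head at 50 moving outward; with R=0.2 the request at 45 loses
# to the one at 60 only if the penalty outweighs the shorter seek.
print(next_request(50, +1, [45, 60, 90], R=0.2, num_cylinders=200))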
The LRU-K page replacement algorithm for database disk buffering This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page by page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages. In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access.
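The victim-selection rule of LRU-K (evict the page whose K-th most recent reference is oldest, treating pages with fewer than K references as having infinite backward K-distance) can be sketched as follows. The class is an illustrative reduction of the idea, not the paper's implementation.

```python
import time
from collections import defaultdict

class LRUK:
    """Track the last K reference times per page and pick eviction victims."""

    def __init__(self, k):
        self.k = k
        self.history = defaultdict(list)   # page -> timestamps, most recent last

    def reference(self, page, now=None):
        ts = self.history[page]
        ts.append(time.monotonic() if now is None else now)
        del ts[:-self.k]                   # keep only the last K timestamps

    def victim(self, resident_pages):
        def kth_recent(page):
            ts = self.history[page]
            # Pages with fewer than K references have infinite backward
            # K-distance, so they sort first and are evicted first.
            return ts[0] if len(ts) == self.k else float("-inf")
        return min(resident_pages, key=kth_recent)
```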
The complexity of promise problems with applications to public-key cryptography
A Correctness Result for Reasoning about One-Dimensional Planning Problems A plan with rich control structures like branches and loops can usually serve as a general solution that solves multiple planning instances in a domain. However, the correctness of such generalized plans is non-trivial to define and verify, especially when it comes to whether or not a plan works for all of the infinitely many instances of the problem. In this paper, we give a precise definition of a generalized plan representation called an FSA plan, with its semantics defined in the situation calculus. Based on this, we identify a class of infinite planning problems, which we call one-dimensional (1d), and prove a correctness result that 1d problems can be verified by finite means. We show that this theoretical result leads to an algorithm that does this verification practically, and a planner based on this verification algorithm efficiently generates provably correct plans for 1d problems.
Global reinforcement learning in neural networks. In this letter, we have found a more general formulation of the REward Increment = Nonnegative Factor x Offset Reinforcement x Characteristic Eligibility (REINFORCE) learning principle first suggested by Williams. The new formulation has enabled us to apply the principle to global reinforcement learning in networks with various sources of randomness, and to suggest several simple local rules for such networks. Numerical simulations have shown that for simple classification and reinforcement learning tasks, at least one family of the new learning rules gives results comparable to those provided by the famous Rules A(r-i) and A(r-p) for the Boltzmann machines.
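As a point of reference for the REINFORCE principle named above, here is a minimal Bernoulli-logistic unit trained with the "weight increment = nonnegative factor x offset reinforcement x characteristic eligibility" rule, where the eligibility is the gradient of the log action probability. The task, constants, and function names are invented for illustration, and this is the classic single-unit rule, not the letter's generalized formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                 # weights of one stochastic unit
alpha, baseline = 0.1, 0.0      # nonnegative factor and reinforcement offset

def step(x, reward_fn):
    global w
    p = 1.0 / (1.0 + np.exp(-w @ x))     # firing probability
    y = float(rng.random() < p)          # sampled binary action
    r = reward_fn(y)
    eligibility = (y - p) * x            # d ln Pr(y|x) / dw for a Bernoulli unit
    w += alpha * (r - baseline) * eligibility
    return r

# Toy task: reward 1 whenever the unit fires, so p should climb toward 1.
for _ in range(500):
    step(np.array([1.0, 0.5, -0.5]), lambda y: 1.0 if y == 1 else 0.0)
```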
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.2496
0.2496
0.1496
0.0396
0.013138
0.000666
0.000159
0.000035
0.000006
0
0
0
0
0
Bi-Transferring Deep Neural Networks For Domain Adaptation Sentiment classification aims to automatically predict the sentiment polarity (e.g., positive or negative) of user generated sentiment data (e.g., reviews, blogs). Due to the mismatch among different domains, a sentiment classifier trained in one domain may not work well when directly applied to other domains. Thus, domain adaptation algorithms for sentiment classification are highly desirable for reducing the domain discrepancy and manual labeling costs. To address this challenge, we propose a novel domain adaptation method, called Bi-Transferring Deep Neural Networks (BTDNNs). The proposed BTDNNs attempts to transfer the source domain examples to the target domain, and also transfer the target domain examples to the source domain. The linear transformation of BTDNNs ensures the feasibility of transferring between domains, and the distribution consistency between the transferred domain and the desirable domain is constrained in a linear data reconstruction manner. As a result, the transferred source domain is supervised and follows a similar distribution as the target domain. Therefore, any supervised method can be used on the transferred source domain to train a classifier for sentiment classification in a target domain. We conduct experiments on a benchmark composed of reviews of 4 types of Amazon products. Experimental results show that our proposed approach significantly outperforms several baseline methods, and achieves an accuracy which is competitive with the state-of-the-art method for domain adaptation.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated into propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
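The kernel trick described above reduces to a small amount of linear algebra: form the kernel matrix, center it in feature space, and take its top eigenvectors. Below is a hedged numpy sketch with a polynomial kernel; the function name, kernel choice, and normalization constant are assumptions of this illustration.

```python
import numpy as np

def kernel_pca(X, degree=2, n_components=2):
    """Project the rows of X onto the top nonlinear principal components."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)         # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Scale eigenvectors so each component has unit norm in feature space.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                            # projections of training points

X = np.random.default_rng(1).normal(size=(100, 5))
Z = kernel_pca(X)   # 100 x 2 array of nonlinear principal components
```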
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
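A toy version of the square-root-information idea: for a one-dimensional robot with odometry measurements and a prior on the first pose, the smoothing problem is linear least squares, so the information matrix can be Cholesky-factorized and solved with two triangular solves. Everything below (the measurements, the unit noise weighting, the 1D setting) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

odometry = np.array([1.0, 1.1, 0.9])        # noisy forward motions u_i
n = len(odometry) + 1                       # poses x_0 .. x_3

A = np.zeros((n, n))                        # measurement Jacobian
b = np.zeros(n)
A[0, 0] = 1.0                               # prior: x_0 = 0
for i, u in enumerate(odometry):            # rows encode x_{i+1} - x_i = u_i
    A[i + 1, i], A[i + 1, i + 1] = -1.0, 1.0
    b[i + 1] = u

info = A.T @ A                              # information matrix
R = np.linalg.cholesky(info)                # lower-triangular square-root factor
y = np.linalg.solve(R, A.T @ b)             # forward substitution
x = np.linalg.solve(R.T, y)                 # back substitution
print(x)                                    # smoothed trajectory [0, 1.0, 2.1, 3.0]
```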
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On odd and even cycles in normal logic programs An odd cycle of a logic program is a simple cycle that has an odd number of negative edges in the dependency graph of the program. Similarly, an even cycle is one that has an even number of negative edges. For a normal logic program that has no odd cycles, while it is known that such a program always has a stable model, and that such a stable model can be computed in polynomial time, we show in this paper that checking whether an atom is in a stable model is NP-complete, and checking whether an atom is in all stable models is co-NP-complete; both are the same as in the general case for normal logic programs. Furthermore, we show that if a normal logic program has exactly one odd cycle, then checking whether it has a stable model is NP-complete, again the same as in the general case. For normal logic programs with a fixed number of even cycles, we show that there is a polynomial time algorithm for computing all stable models. Furthermore, this polynomial time algorithm can be improved significantly if the number of odd cycles is also fixed.
Backdoors to satisfaction A backdoor set is a set of variables of a propositional formula such that fixing the truth values of the variables in the backdoor set moves the formula into some polynomial-time decidable class. If we know a small backdoor set we can reduce the question of whether the given formula is satisfiable to the same question for one or several easy formulas that belong to the tractable class under consideration. In this survey we review parameterized complexity results for problems that arise in the context of backdoor sets, such as the problem of finding a backdoor set of size at most k, parameterized by k. We also discuss recent results on backdoor sets for problems that are beyond NP.
Complexity in Value-Based Argument Systems We consider a number of decision problems formulated in value-based argumentation frameworks (VAFs), a development of Dung's argument systems in which arguments have associated abstract values which are considered relative to the orderings induced by the opinions of specific audiences. In the context of a single fixed audience, it is known that those decision questions which are typically computationally hard in the standard setting admit efficient solution methods in the value-based setting. In this paper we show that, in spite of this positive property, there still remain a number of natural questions that arise solely in value-based schemes for which there are unlikely to be efficient decision processes.
Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning In Verification and in (optimal) AI Planning, a successful method is to formulate the application as Boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior - the proof arguments - illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.
Logic programs with stable model semantics as a constraint programming paradigm Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable-free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built-in integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.
Algorithms for propositional model counting We present algorithms for the propositional model counting problem #SAT. The algorithms utilize tree decompositions of certain graphs associated with the given CNF formula; in particular we consider primal, dual, and incidence graphs. We describe the algorithms coherently for a direct comparison and with sufficient detail for making an actual implementation reasonably easy. We discuss several aspects of the algorithms including worst-case time and space requirements.
A sufficient condition for backtrack-bounded search Backtrack search is often used to solve constraint satisfaction problems. A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree. This analysis leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems. The solutions involve a combination of relaxation preprocessing and backtrack search. The bounds are expressed in terms of the structure of the constraint connections. Specifically, the effort is shown to have a bound exponential in the size of the largest biconnected component of the constraint graph, as opposed to the size of the graph as a whole.
Extremal problems in logic programming and stable model computation We study the following problem: given a class of logic programs C, determine the maximum number of stable models of a program from C. We establish the maximum for the class of all logic programs with at most n clauses, and for the class of all logic programs of size at most n. We also characterize the programs for which the maxima are attained. We obtained similar results for the class of all disjunctive logic programs with at most n clauses, each of length at most m, and for the class of all disjunctive logic programs of size at most n. Our results on logic programs have direct implication for the design of algorithms to compute stable models. Several such algorithms, similar in spirit to the Davis-Putnam procedure, are described in the paper. Our results imply that there is an algorithm that finds all stable models of a program with n clauses after considering the search space of size O(3^(n/3)) in the worst case. Our results also provide some insights into the question of representability of families of sets as families of stable models of logic programs.
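To make "number of stable models" concrete, the following brute-force enumerator checks every candidate atom set against the Gelfond-Lifschitz reduct. It explores all 2^n subsets, so it is only a didactic baseline, not the Davis-Putnam-style procedures described above; the rule encoding as (head, positive body, negative body) triples is an assumption of this sketch.

```python
from itertools import chain, combinations

def least_model(definite_rules):
    """Least Herbrand model of a definite program given as (head, pos_body) pairs."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules, atoms):
    for bits in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1)):
        candidate = set(bits)
        # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
        # candidate; delete negative literals from the remaining rules.
        reduct = [(h, set(pos)) for h, pos, neg in rules if not (set(neg) & candidate)]
        if least_model(reduct) == candidate:
            yield candidate

# Example: p :- not q.   q :- not p.   (two stable models: {p} and {q})
rules = [("p", [], ["q"]), ("q", [], ["p"])]
print(list(stable_models(rules, ["p", "q"])))
```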
Blocks World revisited Contemporary AI shows a healthy trend away from artificial problems towards real-world applications. Less healthy, however, is the fashionable disparagement of “toy” domains: when properly approached, these domains can at the very least support meaningful systematic experiments, and allow features relevant to many kinds of reasoning to be abstracted and studied. A major reason why they have fallen into disrepute is that superficial understanding of them has resulted in poor experimental methodology and consequent failure to extract useful information. This paper presents a sustained investigation of one such toy: the (in)famous Blocks World planning problem, and provides the level of understanding required for its effective use as a benchmark. Our results include methods for generating random problems for systematic experimentation, the best domain-specific planning algorithms against which AI planners can be compared, and observations establishing the average plan quality of near-optimal methods. We also study the distribution of hard/easy instances, and identify the structure that AI planners must be able to exploit in order to approach Blocks World successfully.
Disk Shadowing Disk shadowing is a technique for maintaining a set of two or more identical disk images on separate disk devices. Its primary purpose is to enhance reliability and availability of secondary storage by providing multiple paths to redundant data. However, shadowing can also boost I/O performance. In this paper, we contend that intelligent device scheduling of shadowed discs increases the I/O rate by allowing parallel reads and by substantially reducing the average seek time for random reads. In particular, we develop and analytic model which shows that the seek time for a random read in a shadow set is a monotonic decreasing function of the number of disks.
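The monotone-decrease claim is easy to check empirically: if each of n shadowed disks has its head at an independent random position, a random read can be serviced by the nearest head. A quick Monte Carlo sketch, with cylinder positions idealized as uniform on [0, 1) purely for illustration:

```python
import random

def mean_seek(n_disks, trials=100_000, seed=42):
    """Average seek distance when the nearest of n heads services each read."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        target = rng.random()
        heads = [rng.random() for _ in range(n_disks)]
        total += min(abs(h - target) for h in heads)
    return total / trials

for n in (1, 2, 4, 8):
    print(n, round(mean_seek(n), 4))   # monotonically decreasing in n
```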
Causality and the Qualification Problem In formal theories for reasoning about actions, the qualification problem denotes the problem to account for the many conditions which, albeit being unlikely to occur, may prevent the successful execution of an action. By a simple counter-example in the spirit of the well-known Yale Shooting scenario, we show that the common straightforward approach of globally minimizing such abnormal disqualifications is inadequate as it lacks an appropriate notion of causality. To overcome this difficulty, we propose to incorporate causality by treating the proposition that an action is qualified as a fluent which is initially assumed away by default but otherwise potentially indirectly affected by the execution of actions. Our formal account of the qualification problem includes the proliferation of explanations for surprising disqualifications and also accommodates so-called miraculous disqualifications. We moreover sketch a version of the fluent calculus which involves default rules to address abnormal disqualifications of actions, and which is provably correct wrt. our formal characterization of the qualification problem.
Logical Preference Representation and Combinatorial Vote We introduce the notion of combinatorial vote, where a group of agents (or voters) is supposed to express preferences and come to a common decision concerning a set of non-independent variables to assign. We study two key issues pertaining to combinatorial vote, namely preference representation and the automated choice of an optimal decision. For each of these issues, we briefly review the state of the art, we try to define the main problems to be solved and identify their computational complexity.
A Realistic Experiment in Knowledge Representation in Open Event Calculus: Protocol Specification This paper presents one of the first realistic experiments in the use of Event Calculus in Open Logic Programming: the specification of a process protocol. The specification task involves most of the common complications of temporal reasoning: the representation of context-dependent actions, of preconditions and ramifications of actions, the modelling of system faults, and most of all, the representation of uncertainty of actions. As the underlying language, the Open Logic Programming...
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.1111
0.0611
0.0111
0.004257
0.002307
0.001194
0.000307
0.000051
0.000003
0
0
0
0
0
Who Said We Need to Relax All Variables?
Red-Black Relaxed Plan Heuristics.
Relaxation of Temporal Planning Problems Relaxation is ubiquitous in the practical resolution of combinatorial problems. If a valid relaxation of an instance has no solution then the original instance has no solution. A tractable relaxation can be built and solved in polynomial time. The most obvious application is the efficient detection of certain unsolvable instances. We review existing relaxation techniques in temporal planning and propose an alternative relaxation inspired by a tractable class of temporal planning problems. Our approach is orthogonal to relaxations based on the ignore-all-deletes approach used in non-temporal planning. We show that our relaxation can even be applied to non-temporal problems, and can also be used to extend a tractable class of temporal planning problems.
Red-Black Relaxed Plan Heuristics Reloaded.
Monotone temporal planning: tractability, extensions and applications This paper describes a polynomially-solvable class of temporal planning problems. Polynomiality follows from two assumptions. Firstly, by supposing that each sub-goal fluent can be established by at most one action, we can quickly determine which actions are necessary in any plan. Secondly, the monotonicity of sub-goal fluents allows us to express planning as an instance of STP≠ (Simple Temporal Problem with difference constraints). This class includes temporally-expressive problems requiring the concurrent execution of actions, with potential applications in the chemical, pharmaceutical and construction industries. We also show that any (temporal) planning problem has a monotone relaxation which can lead to the polynomial-time detection of its unsolvability in certain cases. Indeed we show that our relaxation is orthogonal to relaxations based on the ignore-deletes approach used in classical planning since it preserves deletes and can also exploit temporal information.
Causal graphs and structurally restricted planning The causal graph is a directed graph that describes the variable dependencies present in a planning instance. A number of papers have studied the causal graph in both practical and theoretical settings. In this work, we systematically study the complexity of planning restricted by the causal graph. In particular, any set of causal graphs gives rise to a subcase of the planning problem. We give a complete classification theorem on causal graphs, showing that a set of graphs is either polynomial-time tractable, or is not polynomial-time tractable unless an established complexity-theoretic assumption fails; our theorem describes which graph sets correspond to each of the two cases. We also give a classification theorem for the case of reversible planning, and discuss the general direction of structurally restricted planning.
The FF planning system: fast plan generation through heuristic search We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP.
Fusing procedural and declarative planning goals for nondeterministic domains While in most planning approaches goals and plans are different objects, it is often useful to specify goals that combine declarative conditions with procedural plans. In this paper, we propose a novel language for expressing temporally extended goals for planning in nondeterministic domains. The key feature of this language is that it allows for an arbitrary combination of declarative goals expressed in temporal logic and procedural goals expressed as plan fragments. We provide a formal definition of the language and its semantics, and we propose an approach to planning with this language in nondeterministic domains. We implement the planning framework and perform a set of experimental evaluations that show the potentialities of our approach.
Towards a general theory of action and time A formalism for reasoning about actions is proposed that is based on a temporal logic. It allows a much wider range of actions to be described than with previous approaches such as the situation calculus. This formalism is then used to characterize the different types of events, processes, actions, and properties that can be described in simple English sentences. In addressing this problem, we consider actions that involve non-activity as well as actions that can only be defined in terms of the beliefs and intentions of the actors. Finally, a framework for planning in a dynamic world with external events and multiple agents is suggested.
Pushing Goal Derivation in DLP Computations dlv is a knowledge representation system, based on disjunctive logic programming, which offers front-ends to several advanced KR formalisms. This paper describes new techniques for the computation of answer sets of disjunctive logic programs, that have been developed and implemented in the dlv system. These techniques try to "push" the query goals in the process of model generation (query goals are often present either explicitly, like in planning and diagnosis, or implicitly in the form of integrity constraints). This way, a lot of useless models are discarded "a priori" and the computation converges rapidly toward the generation of the "right" answer set. A few preliminary benchmarks show dramatic efficiency gains due to the new techniques.
What Size Net Gives Valid Generalization? We address the question of when a network can be expected to generalize from m random training examples chosen from some arbitrary probability distribution, assuming that future test examples are drawn from the same distribution. Among our results are the following bounds on appropriate sample vs. network size. Assume 0 < ε ≤ 1/8. We show that if m ≥ O((W/ε) log(N/ε)) random examples can be loaded on...
Automatic generation of help from interface design models Model-based interface design can save substantial effort in building help systems for interactive applications by generating help automatically from the model used to implement the interface, and by providing a framework for developers to easily refine the automatically-generated help texts. This paper describes a system that generates hypertext-based help about data presented in application displays, commands to manipulate data, and interaction techniques to invoke commands. The refinement component provides several levels of customization, including programming-by-example techniques to let developers directly edit the help windows that the system produces, and the possibility to refine help generation rules.
Intention reconsideration in complex environments One of the key problems in the design of belief-desire-intention (BDI) agents is that of finding an appropriate policy for intention reconsideration. In previous work, Kinny and Georgeff investigated the effectiveness of several such reconsideration policies, and demonstrated that in general, there is no one best approach: different environments demand different intention reconsideration strategies. In this paper, we further investigate the relationship between the effectiveness of an agent and its...
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.033287
0.027602
0.027595
0.011333
0.009198
0.004124
0.001855
0.000312
0.000064
0.000002
0
0
0
0
New techniques for ensuring the long term integrity of digital archives A large portion of the government, business, cultural, and scientific digital data being created today needs to be archived and preserved for future use over periods ranging from a few years to decades and sometimes centuries. A fundamental requirement of a long term archive is to ensure the integrity of its holdings. In this paper, we develop a new methodology to address the integrity of long term archives using rigorous cryptographic techniques. Our approach involves the generation of a small-size integrity token for each digital object to be archived, and some cryptographic summary information based on all the objects handled within a dynamic time period. We present a framework that enables the continuous auditing of the holdings of the archive depending on the policy set by the archive. Moreover, an independent auditor will be able to verify the integrity of every version of an archived digital object as well as link the current version to the original form of the object when it was ingested into the archive. We built a prototype system that is completely independent of the archive's underlying architecture, and tested it on large scale data. We include in this paper some preliminary results on the validation and performance of our prototype.
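The per-object token idea can be illustrated in a few lines of Python. The paper's actual token format and cryptographic summary scheme are not specified here, so this hedged sketch simply binds a SHA-256 digest to an object identifier and an ingestion timestamp; all names are invented.

```python
import hashlib
import json
import time

def make_token(object_id: str, content: bytes) -> dict:
    """Build a minimal integrity token for one archived object."""
    return {
        "object": object_id,
        "ingested": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(token: dict, content: bytes) -> bool:
    """Check the current bytes of an object against its stored token."""
    return hashlib.sha256(content).hexdigest() == token["sha256"]

token = make_token("report-001", b"archived payload")
assert verify(token, b"archived payload")
assert not verify(token, b"tampered payload")
print(json.dumps(token, indent=2))
```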
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated into propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Architecture Design of a Data Intensive Satellite Image Processing and Distribution System High-speed and cost-effective architecture design has become a great challenge for future data-intensive scalable high performance computing (HPC) systems. In this paper, we discuss the significant architecture design challenges of a large scale satellite image processing and distribution system maintained by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS). Specifically, we conduct a real world case study on 1) how a changing workload can greatly affect an existing system architecture; 2) how EROS's new system architecture adapts to the significant workload changes; and 3) how to further optimize the performance of the current EROS system.
Performance Evaluation of Traditional Caching Policies on a Large System with Petabytes of Data Caching is widely known to be an effective method for improving I/O performance by storing frequently used data on higher speed storage components. However, most existing studies that focus on caching performance evaluate fairly small files populating a relatively small cache. Few reports are available that detail the performance of traditional cache replacement policies on extremely large caches. Do such traditional caching policies still work effectively when applied to systems with petabytes of data? In this paper, we comprehensively evaluate the performance of several cache policies, which include First-In-First-Out (FIFO), Least Recently Used (LRU) and Least Frequently Used (LFU), on the global satellite imagery distribution application maintained by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS). Evidence is presented suggesting traditional caching policies are capable of providing performance gains when applied to large data sets, just as with smaller data sets. Our evaluation is based on approximately three million real-world satellite image download requests representing global user download behavior since October 2008.
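A scaled-down version of such an evaluation fits in a short script: replay a request trace against FIFO, LRU, and LFU at a fixed capacity and compare hit ratios. The trace below is synthetic and the policies are textbook versions, not EROS's production configuration.

```python
from collections import Counter, OrderedDict, deque

def hit_ratio(policy, trace, capacity):
    """Simulate one replacement policy over a request trace; return hit ratio."""
    cache, hits = set(), 0
    fifo, lru, freq = deque(), OrderedDict(), Counter()
    for item in trace:
        freq[item] += 1
        if item in cache:
            hits += 1
            if policy == "LRU":
                lru.move_to_end(item)          # refresh recency on a hit
            continue
        if len(cache) >= capacity:             # miss with a full cache: evict
            if policy == "FIFO":
                cache.discard(fifo.popleft())
            elif policy == "LRU":
                cache.discard(lru.popitem(last=False)[0])
            else:                              # LFU: least frequently used
                cache.discard(min(cache, key=freq.__getitem__))
        cache.add(item)
        fifo.append(item)
        lru[item] = None
    return hits / len(trace)

trace = [i % 7 for i in range(50)] + [0] * 20
for p in ("FIFO", "LRU", "LFU"):
    print(p, round(hit_ratio(p, trace, capacity=4), 3))
```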
Practical buffer cache management scheme based on simple prefetching Many replacement and prefetching policies have recently been proposed for buffer cache management. However, many real operating systems, including GNU/Linux, generally use the simple least recently used (LRU) replacement policy, with prefetching being employed only in special situations such as when sequentiality is detected. In this paper, we propose the SA-W2R scheme that integrates buffer management and prefetching, where prefetching is done constantly in an aggressive fashion. The scheme is simple to implement, making it a feasible solution in real systems. In its basic form, for buffer replacement, it uses the LRU policy. However, its modular design allows for any replacement policy to be incorporated into the scheme. For prefetching, it uses the LRU-one block lookahead (LRU-OBL) approach, eliminating any extra burden that is generally necessary in other prefetching approaches. Implementation studies based on GNU/Linux show that SA-W2R performs better than GNU/Linux, with a maximum increase of 23% for the workloads considered.
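The one-block-lookahead part of the scheme is simple to sketch: on every access to block b, also admit block b+1 under plain LRU replacement. The class below illustrates only that lookahead behavior; the rest of SA-W2R is omitted, and the names are hypothetical.

```python
from collections import OrderedDict

class LruObl:
    """LRU buffer cache with aggressive one-block lookahead prefetching."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = OrderedDict()   # block -> None, LRU order front to back

    def _admit(self, block):
        if block in self.buf:
            self.buf.move_to_end(block)
        else:
            if len(self.buf) >= self.capacity:
                self.buf.popitem(last=False)    # evict the LRU block
            self.buf[block] = None

    def access(self, block):
        hit = block in self.buf
        self._admit(block)
        self._admit(block + 1)                  # constant one-block lookahead
        return hit
```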
Dynamo: amazon's highly available key-value store Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an "always-on" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use.
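Dynamo's object versioning relies on tracking causality between versions; a minimal vector-clock sketch shows the comparison that decides when application-assisted conflict resolution is needed. The dictionary format and node names here are illustrative, not Dynamo's actual data structures.

```python
def increment(clock: dict, node: str) -> dict:
    """Return a copy of `clock` advanced by one event at `node`."""
    out = dict(clock)
    out[node] = out.get(node, 0) + 1
    return out

def descends(a: dict, b: dict) -> bool:
    """True if version a is causally newer than or equal to version b."""
    return all(a.get(n, 0) >= c for n, c in b.items())

def conflict(a: dict, b: dict) -> bool:
    """Concurrent versions: neither descends from the other."""
    return not descends(a, b) and not descends(b, a)

v1 = increment({}, "nodeA")            # {"nodeA": 1}
v2 = increment(v1, "nodeB")            # descends from v1
v3 = increment(v1, "nodeC")            # concurrent with v2
assert descends(v2, v1) and conflict(v2, v3)   # v2/v3 need reconciliation
```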
ARC: A Self-Tuning, Low Overhead Replacement Cache We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashion. The policy ARC uses a learning rule to adaptively and continually revise its assumptions about the workload. The policy ARC is empirically universal, that is, it empirically performs as well as a certain fixed replacement policy, even when the latter uses the best workload-specific tuning parameter that was selected in an offline fashion. Consequently, ARC works uniformly well across varied workloads and cache sizes without any need for workload-specific a priori knowledge or tuning. Various policies such as LRU-2, 2Q, LRFU, and LIRS require user-defined parameters, and, unfortunately, no single choice works uniformly well across different workloads and cache sizes. The policy ARC is simple to implement and, like LRU, has constant complexity per request. In comparison, policies LRU-2 and LRFU both require logarithmic time complexity in the cache size. The policy ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache. On 23 real-life traces drawn from numerous domains, ARC leads to substantial performance gains over LRU for a wide range of cache sizes. For example, for a SPC1-like synthetic benchmark, at 4GB cache, LRU delivers a hit ratio of 9.19% while ARC achieves a hit ratio of 20%.
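For concreteness, below is a compact Python transcription of the ARC pseudocode as published by its authors. It is simplified for readability (integer-rounded adaptation step, values ignored, a defensive guard added, no concurrency), so treat it as a study aid rather than a production cache.

```python
import random
from collections import OrderedDict

class ARC:
    def __init__(self, c):
        self.c = c                  # cache capacity in pages
        self.p = 0                  # adaptive target size for T1 (recency side)
        self.t1, self.t2 = OrderedDict(), OrderedDict()   # resident pages
        self.b1, self.b2 = OrderedDict(), OrderedDict()   # ghost (history) lists

    def _replace(self, x):
        # Evict from T1 into ghost B1, or from T2 into ghost B2, steered by p.
        if self.t1 and (len(self.t1) > self.p or (x in self.b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = None
        else:
            src, dst = (self.t2, self.b2) if self.t2 else (self.t1, self.b1)  # guard
            old, _ = src.popitem(last=False)
            dst[old] = None

    def access(self, x):
        if x in self.t1 or x in self.t2:            # cache hit: promote to T2 MRU
            self.t1.pop(x, None); self.t2.pop(x, None)
            self.t2[x] = None
            return True
        if x in self.b1:                            # ghost hit: recency undervalued
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(x); del self.b1[x]; self.t2[x] = None
            return False
        if x in self.b2:                            # ghost hit: frequency undervalued
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(x); del self.b2[x]; self.t2[x] = None
            return False
        # Completely new page: keep the directory at no more than 2c entries.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False); self._replace(x)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
            if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(x)
        self.t1[x] = None
        return False

random.seed(1)
arc = ARC(c=100)
hits = sum(arc.access(random.randrange(300)) for _ in range(10000))
print("hit ratio:", hits / 10000)
```

The single tunable quantity p moves toward recency on B1 ghost hits and toward frequency on B2 ghost hits, which is exactly the self-tuning behavior the abstract describes.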
The well-founded semantics for general logic programs A general logic program (abbreviated to "program" hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the "meaning of the program," or its "declarative semantics." Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a "satisfactory" total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of "stratified" and "locally stratified" programs. The method in this paper is also compared with other proposals in the literature, including Clark's "program completion," Fitting's and Kunen's 3-valued interpretations of it, and the "stable models" of Gelfond and Lifschitz.
Affinity analysis of coded data sets Coded data sets are commonly used as compact representations of real-world processes. Such data sets have been studied within various research fields, from association mining, data warehousing, knowledge discovery, and collaborative filtering to machine learning. However, previous studies on coded data sets have introduced methods for the analysis of rather small data sets. This study proposes applying information retrieval for enabling high-performance analysis of data masses that scale beyond traditional approaches. Part of this PhD study focuses on a new type of kernel projection function that can be used to find similarities in sparse discrete data spaces. This study presents experimental results on how information retrieval indexes scale and outperform two common relational data schemas with a leading commercial DBMS for market basket analysis.
Internet of Things (IoT): A vision, architectural elements, and future directions Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud-centric vision for the worldwide implementation of the Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on the interaction of private and public Clouds, is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at the technological research community.
The complexity of combinatorial problems with succinct input representation Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5], where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the "counting polynomial-time hierarchy" which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved.
Recognizing frozen variables in constraint satisfaction problems In constraint satisfaction problems over finite domains, some variables can be frozen, that is, they take the same value in all possible solutions. We study the complexity of the problem of recognizing frozen variables with restricted sets of constraint relations allowed in the instances. We show that the complexity of such problems is determined by certain algebraic properties of these relations. Under the assumption that NP ≠ coNP (and consequently PTIME ≠ NP), we characterize all tractable problems, and describe large classes of NP-complete, coNP-complete, and DP-complete problems. As an application of these results, we completely classify the complexity of the problem in two cases: (1) with domain size 2; and (2) when all unary relations are present. We also give a rough classification for domain size 3.
Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early.
A cost-benefit scheme for high performance predictive prefetching
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
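The extra parity elements discussed above build on ordinary XOR parity (the mirrored elements are simply copies of existing parity). A minimal sketch of the XOR building block follows, with toy block contents of my own choosing; the paper's two-dimensional array layout is not reproduced here.

```python
from functools import reduce

def parity(blocks):
    """XOR parity over equal-length byte blocks (the basic RAID building block)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"satellite", b"image-ch0", b"image-ch1"]   # toy equal-length data blocks
p = parity(data)

# Recover a lost block by XOR-ing the survivors with the parity block.
lost = 1
recovered = parity([blk for i, blk in enumerate(data) if i != lost] + [p])
assert recovered == data[lost]
print(recovered)
```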
score_0: 1.2
score_1: 0.066667
score_2: 0.05
score_3: 0.007692
score_4: 0.001905
score_5: 0
score_6: 0
score_7: 0
score_8: 0
score_9: 0
score_10: 0
score_11: 0
score_12: 0
score_13: 0
Plan Synthesis for Knowledge and Action Bases. We study plan synthesis for a variant of Knowledge and Action Bases (KABs), a rich, dynamic framework, where states are description logic (DL) knowledge bases (KBs) whose extensional part is manipulated by actions that possibly introduce new objects from an infinite domain. We show that plan existence over KABs is undecidable even under severe restrictions. We then focus on state-bounded KABs, a class for which plan existence is decidable, and provide sound and complete plan synthesis algorithms, which combine techniques based on standard planning, DL query answering, and finite-state abstraction. All results hold for any DL with decidable query answering. We finally show that for lightweight DLs, plan synthesis can be compiled into standard ADL planning.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
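To illustrate the definition (this is my own toy code, not from the paper): a set of atoms is a stable model exactly when it equals the least model of its Gelfond-Lifschitz reduct, which a brute-force checker can test directly for small ground normal programs.

```python
from itertools import chain, combinations

def stable_models(rules, atoms):
    """Enumerate stable models of a ground normal program by brute force.

    `rules` is a list of (head, positive_body, negative_body) triples over the
    atom names in `atoms`. Exponential in |atoms|; strictly a teaching aid.
    """
    def least_model(definite):
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in definite:
                if pos <= model and head not in model:
                    model.add(head); changed = True
        return model

    for bits in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1)):
        candidate = set(bits)
        # Gelfond-Lifschitz reduct: drop rules blocked by the candidate,
        # then strip the remaining negative literals.
        reduct = [(h, set(pos)) for h, pos, neg in rules if not (set(neg) & candidate)]
        if least_model(reduct) == candidate:
            yield candidate

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
rules = [("p", [], ["q"]), ("q", [], ["p"])]
print(list(stable_models(rules, ["p", "q"])))
```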
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
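A minimal numpy sketch of the kernel-eigenvalue idea, assuming an RBF kernel and invented data: center the Gram matrix in feature space, eigendecompose it, and project onto the leading components. The kernel choice and gamma are my assumptions, not the paper's polynomial setup.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA sketch: eigendecompose the centred Gram matrix instead of
    the covariance matrix, so components live in the implicit feature space."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))   # RBF kernel
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # double-centre in feature space
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                              # projections of the training data

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(kernel_pca(X, n_components=2).shape)          # (100, 2)
```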
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
score_0: 1
score_1: 0
score_2: 0
score_3: 0
score_4: 0
score_5: 0
score_6: 0
score_7: 0
score_8: 0
score_9: 0
score_10: 0
score_11: 0
score_12: 0
score_13: 0
Identification of rice diseases using deep convolutional neural networks. The automatic identification and diagnosis of rice diseases are highly desired in the field of agricultural information. Deep learning is a hot research topic in pattern recognition and machine learning at present, and it can effectively solve these problems in vegetable pathology. In this study, we propose a novel rice disease identification method based on deep convolutional neural network (CNN) techniques. Using a dataset of 500 natural images of diseased and healthy rice leaves and stems captured from a rice experimental field, CNNs are trained to identify 10 common rice diseases. Under the 10-fold cross-validation strategy, the proposed CNN-based model achieves an accuracy of 95.48%. This accuracy is much higher than that of conventional machine learning models. The simulation results for the identification of rice diseases show the feasibility and effectiveness of the proposed method.
Growing random forest on deep convolutional neural networks for scene categorization. Random forests are grown on convolutional neural networks for scene categorization. Features from multiple layers of deep convolutional neural networks are utilized. A feature selection method is proposed to use random forests to categorize scenes. Breakthrough performances have been achieved in computer vision by utilizing deep neural networks. In this paper we propose to use random forests to classify image representations obtained by concatenating multiple layers of learned features of deep convolutional neural networks for scene classification. Specifically, we first use deep convolutional neural networks pre-trained on the large-scale image database Places to extract features from scene images. Then, we concatenate multiple layers of features of the deep neural networks as image representations. After that, we use a random forest as the classifier for scene classification. Moreover, to reduce feature redundancy in image representations we derive a novel feature selection method for selecting features that are suitable for random forest classification. Extensive experiments are conducted on two benchmark datasets, i.e. MIT-Indoor and UIUC-Sports. The obtained results demonstrate the effectiveness of the proposed method. The contributions of the paper are as follows. First, by extracting multiple layers of deep neural networks, we can explore more information of image contents for determining their categories. Second, we propose a novel feature selection method that can be used to reduce redundancy in features obtained by deep neural networks for classification based on random forest. In particular, since deep learning methods can be used to augment expert systems by having the systems essentially train themselves, and the proposed framework is general and can be easily extended to other intelligent systems that utilize deep learning methods, the proposed method provides a potential way to improve the performance of other expert and intelligent systems.
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
Histograms of Oriented Gradients for Human Detection We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
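As a rough illustration of the descriptor's front end (per-cell orientation binning weighted by gradient magnitude), here is a toy numpy sketch. The cell size, bin count, and the omission of block normalization and the whole detection pipeline are simplifications of mine, not the paper's exact configuration.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Toy HOG front end: per-cell histograms of unsigned gradient orientation,
    weighted by gradient magnitude (no block normalisation here)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    h, w = img.shape
    H = np.zeros((h // cell, w // cell, bins))
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            H[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)
    return H

img = np.zeros((64, 64)); img[:, 32:] = 1.0             # a vertical edge
print(hog_cells(img).shape)                              # (8, 8, 9)
```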
Logic programs with classical negation
Logic programming and knowledge representation In this paper, we review recent work aimed at the application of declarative logic programming to knowledge representation in artificial intelligence. We consider extensions of the language of definite logic programs by classical (strong) negation, disjunction, and some modal operators and show how each of the added features extends the representational power of the language.
The contract net protocol: high-level communication and control in a distributed problem solver The contract net protocol has been developed to specify problem-solving communication and control for nodes in a distributed problem solver. Task distribution is effected by a negotiation process, a discussion carried on between nodes with tasks to be executed and nodes that may be able to execute those tasks.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
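The L1-regularized least squares subproblem above can be illustrated with plain ISTA. Note that the paper's own solver is the faster feature-sign search algorithm, so this is only a hedged baseline stand-in, with the dictionary and signal invented for the demo.

```python
import numpy as np

def ista(B, x, lam=0.1, step=None, iters=200):
    """Solve min_s 0.5*||x - B s||^2 + lam*||s||_1 with plain ISTA
    (gradient step on the smooth part, then soft thresholding)."""
    if step is None:
        step = 1.0 / np.linalg.norm(B, 2) ** 2          # 1 / Lipschitz constant
    s = np.zeros(B.shape[1])
    for _ in range(iters):
        g = B.T @ (B @ s - x)                           # gradient of the smooth part
        z = s - step * g
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return s

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 128)); B /= np.linalg.norm(B, axis=0)     # overcomplete basis
s_true = np.zeros(128); s_true[[3, 40, 99]] = [1.0, -2.0, 0.5]
x = B @ s_true + 0.01 * rng.normal(size=64)
print(np.nonzero(np.round(ista(B, x), 2))[0])           # mostly the true support
```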
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate checksum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
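A small sketch, under assumptions of my own (unit stripe depth, block-addressed disks), of the address mapping that interleaving implies: logical block b lands on disk b mod n, so n consecutive blocks can transfer in parallel when the spindles are synchronized.

```python
def place(block, n_disks, stripe_unit=1):
    """Map a logical block to (disk, offset) under simple interleaving."""
    stripe = block // (n_disks * stripe_unit)
    within = block % (n_disks * stripe_unit)
    return within // stripe_unit, stripe * stripe_unit + within % stripe_unit

for b in range(8):
    print(b, place(b, n_disks=4))
# blocks 0..3 land on disks 0..3 at offset 0, blocks 4..7 at offset 1, ...
```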
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P ∪ {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
When Multivariate Forecasting Meets Unsupervised Feature Learning - Towards a Novel Anomaly Detection Framework for Decision Support. Many organizations adopt information technologies to make intelligent decisions during operations. Time-series data plays a crucial role in supporting such decision making processes. Though current studies on time-series based decision making provide reasonably good results, the anomaly detection essence underlying most of the scenarios and the plenitude of unlabeled data are largely overlooked and left unexplored. We argue that by using multivariate forecasting and unsupervised feature learning, these two important research gaps could be filled. We carried out two experiments in this study to test our approach, and the results showed that decision support performance was significantly improved. We also propose a novel framework to integrate the two methods so that our approach may be generalized to a larger problem domain. We discuss the advantages, the limitations, and the future work of our study. Both practical and theoretical contributions are also discussed in the paper.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
score_0: 1.2
score_1: 0.1
score_2: 0.005128
score_3: 0.001961
score_4: 0
score_5: 0
score_6: 0
score_7: 0
score_8: 0
score_9: 0
score_10: 0
score_11: 0
score_12: 0
score_13: 0
Learning to Disentangle Factors of Variation with Manifold Interaction.
Modeling Deep Temporal Dependencies with Recurrent Grammar Cells. We propose modeling time series by representing the transformations that take a frame at time t to a frame at time t+1. To this end we show how a bi-linear model of transformations, such as a gated autoencoder, can be turned into a recurrent network, by training it to predict future frames from the current one and the inferred transformation using backprop-through-time. We also show how stacking multiple layers of gating units in a recurrent pyramid makes it possible to represent the "syntax" of complicated time series, and that it can outperform standard recurrent neural networks in terms of prediction accuracy on a variety of tasks.
Multi-Task Deep Neural Network For Multi-Label Learning This paper proposes a multi-task deep neural network (MT-DNN) architecture to handle the multi-label learning problem, in which each label learning is defined as a binary classification task, i.e., a positive class for "an instance owns this label" and a negative class for "an instance does not own this label". Multi-label learning is accordingly transformed into multiple binary-class classification tasks. Considering that a deep neural network (DNN) architecture can learn good intermediate representations shared across tasks, we generalize the single classification task of a traditional DNN into multiple binary classification tasks by defining the output layer with a negative class node and a positive class node for each label. After a pretraining process similar to that of deep belief nets, we redefine the label assignment error of MT-DNN and perform the backpropagation algorithm to fine-tune the network. To evaluate the proposed model, we carry out image annotation experiments on two public image datasets, with 2,000 images and 30,000 images respectively. The experiments demonstrate that the proposed model achieves state-of-the-art performance.
NICE: Non-linear Independent Components Estimation. We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the Jacobian determinant and inverse transform is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
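The heart of NICE is the additive coupling layer; a minimal numpy sketch follows, with a fixed random stand-in for the learned coupling network m (in the paper, m is a deep neural network trained by maximum likelihood). The triangular Jacobian has unit diagonal, so the log-determinant term in the likelihood is zero and inversion is exact.

```python
import numpy as np

def forward(x, m):
    """Additive coupling: y1 = x1, y2 = x2 + m(x1). The Jacobian is
    triangular with unit diagonal, so log|det J| = 0 and inversion is exact."""
    x1, x2 = x[:, ::2], x[:, 1::2]        # fixed even/odd partition of dimensions
    return x1, x2 + m(x1)

def inverse(y1, y2, m):
    return y1, y2 - m(y1)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
m = lambda h: np.tanh(h @ W)              # stand-in for the deep coupling network

x = rng.normal(size=(5, 4))
y1, y2 = forward(x, m)
x1, x2 = inverse(y1, y2, m)
assert np.allclose(x1, x[:, ::2]) and np.allclose(x2, x[:, 1::2])
print("exactly invertible; log|det J| = 0")
```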
A Novel Semi-Supervised Deep Learning Framework for Affective State Recognition on EEG Signals Nowadays the rapid development in the area of human-computer interaction has given birth to a growing interest on detecting different affective states through smart devices. By using the modern sensor equipment, we can easily collect electroencephalogram (EEG) signals, which capture the information from central nervous system and are closely related with our brain activities. Through the training on EEG signals, we can make reasonable analysis on people's affection, which is very promising in various areas. Unfortunately, the special properties of EEG dataset have brought difficulties for conventional machine learning methods. The main reasons lie in two aspects: the small set of labeled samples and the noisy channel problem. To overcome these difficulties and successfully identify the affective states, we come up with a novel semi-supervised deep structured framework. Compared with previous deep learning models, our method is more adapted to the EEG classification problem. We first adopt a two-level procedure, which involves both supervised label information and unsupervised structure information to jointly make decision on channel selection. And then, we add a generative Restricted Boltzmann Machine (RBM) model for the classification task, and use the training objectives of generative learning and unsupervised learning to jointly regularize the discriminative training. Finally, we extend it to the active learning scenario, which solves the costly labeling problem. The experiments conducted on real EEG dataset have shown both the convincing result on critical channel selection and the superiority of our method over multiple baselines for the affective state recognition.
Human-level control through deep reinforcement learning. The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Learning internal representations Probably the most important problem in machine learning is the preliminary biasing of a learner's hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt. In this paper a mechanism for automatically learning or biasing the learner's hypothesis space is introduced. It works by first learning an appropriate internal representation for a learning environment and then...
A Robust Deep Model for Improved Classification of AD/MCI Patients Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods.
A Scalable Hierarchical Distributed Language Model Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models.
Robust Graph Mode Seeking by Graph Shift
Optimal disk allocation for partial match queries The problem of disk allocation addresses the issue of how to distribute a file on several disks in order to maximize concurrent disk accesses in response to a partial match query. In this paper a coding-theoretic analysis of this problem is presented, and both necessary and sufficient conditions for the existence of strictly optimal allocation methods are provided. Based on a class of optimal codes, known as maximum distance separable codes, strictly optimal allocation methods are constructed. Using the necessary conditions proved, we argue that the standard definition of strict optimality is too strong and cannot be attained, in general. Hence, we reconsider the definition of optimality. Instead of basing it on an abstract definition that may not be attainable, we propose a new definition based on the best possible allocation method. Using coding theory, allocation methods that are optimal according to our proposed criterion are developed.
Meta-ViPIOS: Harness Distributed I/O Resources with ViPIOS Two factors strongly influenced the research in high performance computing in the last few years: the I/O bottleneck and cluster systems. Firstly, for many supercomputing applications the limiting factor is not the number of available CPUs anymore, but the bandwidth of the disk I/O system. Secondly, a shift from the classical, costly supercomputer systems to affordable clusters of workstations is apparent, which allows problem solutions at a much lower price. As a result we present in this paper...
Learning to classify parallel input/output access patterns Input/output performance on current parallel file systems is sensitive to a good match of application access patterns to file system capabilities. Automatic input/output access pattern classification can determine application access patterns at execution time, guiding adaptive file system policies. In this paper, we examine and compare two novel input/output access pattern classification methods based on learning algorithms. The first approach uses a feedforward neural network previously trained on access pattern benchmarks to generate qualitative classifications. The second approach uses hidden Markov models trained on access patterns from previous executions to create a probabilistic model of input/output accesses. In a parallel application, access patterns can be recognized at the level of each local thread or as the global interleaving of all application threads. Classification of patterns at both levels is important for parallel file system performance; we propose a method for forming global classifications from local classifications. We present results from parallel and sequential benchmarks and applications that demonstrate the viability of this approach.
Privacy-preserving restricted Boltzmann machine. With the arrival of the big data era, it is predicted that distributed data mining will lead to an information technology revolution. To motivate different institutes to collaborate with each other, the crucial issue is to eliminate their concerns regarding data privacy. In this paper, we propose a privacy-preserving method for training a restricted Boltzmann machine (RBM). The RBM can be obtained without the parties revealing their private data to each other when using our privacy-preserving method. We provide a correctness and efficiency analysis of our algorithms. The comparative experiment shows that the accuracy is very close to that of the original RBM model.
score_0: 1.052306
score_1: 0.053725
score_2: 0.05
score_3: 0.0185
score_4: 0.006942
score_5: 0.002185
score_6: 0.000135
score_7: 0.000017
score_8: 0.000007
score_9: 0.000001
score_10: 0
score_11: 0
score_12: 0
score_13: 0
Recursive Polynomial Reductions for Classical Planning. Reducing accidental complexity in planning problems is a well-established method for increasing the efficiency of classical planning. Removal of superfluous facts and actions, and problem transformation by recursive macro actions, are representatives of such methods working directly on input planning problems. Despite their general applicability and thorough theoretical analysis, there is only a sparse amount of experimental results. In this paper, we adopt selected reduction methods from the literature and amend them with a generalization-based reduction scheme and auxiliary reductions. We show that all presented reductions are polynomial in time in the size of an input problem. All reductions applied in a recursive manner produce only safe (solution-preserving) abstractions of the problem, and they can implicitly represent exponentially long plans in a compact form. Experimentally, we validate the efficiency of the presented reductions on the IPC benchmark set and show an average 24% reduction over all problems. Additionally, we experimentally analyze the trade-off between the increase of coverage and the decrease of plan quality.
Tractable Cost-Optimal Planning Over Restricted Polytree Causal Graphs Causal graphs are widely used to analyze the complexity of planning problems. Many tractable classes have been identified with their aid and state-of-the-art heuristics have been derived by exploiting such classes. In particular, Katz and Keyder have studied causal graphs that are hourglasses (which is a generalization of forks and inverted-forks) and shown that the corresponding cost-optimal planning problem is tractable under certain restrictions. We continue this work by studying polytrees (which is a generalization of hourglasses) under similar restrictions. We prove tractability of cost-optimal planning by providing an algorithm based on a novel notion of variable isomorphism. Our algorithm also sheds light on the k-consistency procedure for identifying unsolvable planning instances. We speculate that this may, at least partially, explain why merge-and-shrink heuristics have been successful for recognizing unsolvable instances.
Symmetry Reduction for SAT Representations of Transition Systems Symmetries are inherent in systems that consist of several interchangeable objects or components. When reasoning about such systems, big computational savings can be obtained if the presence of symmetries is recognized. In earlier work, symmetries in constraint satisfaction problems have been handled by introducing symmetry-breaking constraints. In reasoning about transition systems, notably in model-checking and reachability analysis in computer-aided verification, symmetries have been handled by symmetry reduction algorithms that eliminate redundant search caused by symmetries. In this work, we investigate symmetry handling in a problem in the intersection of these two areas: handling symmetries in representations of transition systems in the propositional logic. The problem shows up in representations of AI planning as a satisfiability problem, and in recent approaches to model-checking that represent transition systems as propositional formulae. Symmetry-breaking constraints can be added to the propositional logic representation of transition sequences for removing all the symmetry at one point of time, but removing symmetry from the whole transition sequence is much more difficult, and has not been addressed in earlier work. We present a solution to the problem.
Structural Patterns Heuristics via Fork Decomposition We consider a generalization of the PDB homomorphism abstractions to what is called "structural patterns". The basic idea is in abstracting the problem in hand into provably tractable fragments of optimal planning, alleviating by that the constraint of PDBs to use projections of only low dimensionality. We introduce a general framework for additive structural patterns based on decomposing the problem along its causal graph, suggest a concrete non-parametric instance of this framework called fork-decomposition, and formally show that the admissible heuristics induced by the latter abstractions provide state-of-the-art worst-case informativeness guarantees on several standard domains.
Structure and Complexity in Planning with Unary Operators Unary operator domains - i.e., domains in which operators have a single effect - arise naturally in many control problems. In its most general form, the problem of STRIPS planning in unary operator domains is known to be as hard as the general STRIPS planning problem - both are PSPACE-complete. However, unary operator domains induce a natural structure, called the domain's causal graph. This graph relates between the preconditions and effect of each domain operator. Causal graphs were exploited by Williams and Nayak in order to analyze plan generation for one of the controllers in NASA's Deep-Space One spacecraft. There, they utilized the fact that when this graph is acyclic, a serialization ordering over any subgoal can be obtained quickly. In this paper we conduct a comprehensive study of the relationship between the structure of a domain's causal graph and the complexity of planning in this domain. On the positive side, we show that a non-trivial polynomial time plan generation algorithm exists for domains whose causal graph induces a polytree with a constant bound on its node indegree. On the negative side, we show that even plan existence is hard when the graph is a directed-path singly connected DAG. More generally, we show that the number of paths in the causal graph is closely related to the complexity of planning in the associated domain. Finally we relate our results to the question of complexity of planning with serializable subgoals.
Logic Programming and Negation: A Survey. We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them.
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
Consensus and Cooperation in Networked Multi-Agent Systems This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis...
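A hedged sketch of the basic linear consensus protocol such frameworks analyze (the graph, gain, and iteration count are invented for the demo): each agent repeatedly moves toward its neighbors' values via x <- x - eps*L*x, with L the graph Laplacian, and all states converge to the initial average.

```python
import numpy as np

# Linear consensus on a fixed undirected 4-cycle; values converge to the average.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
x = np.array([1.0, 3.0, 5.0, 7.0])          # initial agent states
eps = 0.2                                   # step size; must satisfy eps < 1/max_degree
for _ in range(100):
    x = x - eps * L @ x
print(x, x.mean())                          # all entries near the initial mean 4.0
```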
Unsupervised Learning of Multiple Motifs in Biopolymers Using Expectation Maximization The MEME algorithm extends the expectation maximization (EM) algorithm for identifying motifs in unaligned biopolymer sequences. The aim of MEME is to discover new motifs in a set of biopolymer sequences where little or nothing is known in advance about any motifs that may be present. MEME innovations expand the range of problems which can be solved using EM and increase the chance of finding good solutions. First, subsequences which actually occur in the biopolymer sequences are used as starting points for the EM algorithm to increase the probability of finding globally optimal motifs. Second, the assumption that each sequence contains exactly one occurrence of the shared motif is removed. This allows multiple appearances of a motif to occur in any sequence and permits the algorithm to ignore sequences with no appearance of the shared motif, increasing its resistance to noisy data. Third, a method for probabilistically erasing shared motifs after they are found is incorporated so that several distinct motifs can be found in the same set of sequences, both when different motifs appear in different sequences and when a single sequence may contain multiple motifs. Experiments show that MEME can discover both the CRP and LexA binding sites from a set of sequences which contain one or both sites, and that MEME can discover both the −10 and −35 promoter regions in a set of E. coli sequences.
Recognizing frozen variables in constraint satisfaction problems In constraint satisfaction problems over finite domains, some variables can be frozen, that is, they take the same value in all possible solutions. We study the complexity of the problem of recognizing frozen variables with restricted sets of constraint relations allowed in the instances. We show that the complexity of such problems is determined by certain algebraic properties of these relations. Under the assumption that NP ≠ coNP (and consequently PTIME ≠ NP), we characterize all tractable problems, and describe large classes of NP-complete, coNP-complete, and DP-complete problems. As an application of these results, we completely classify the complexity of the problem in two cases: (1) with domain size 2; and (2) when all unary relations are present. We also give a rough classification for domain size 3.
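The notion of a frozen variable has a direct, if exponential-time, operational reading: enumerate all solutions and keep the variables that never vary. The sketch below is a specification-level illustration only, since the paper's point is precisely when one can do better than this; the instance encoding is my assumption.

    from itertools import product

    def frozen_variables(n_vars, domain, constraints):
        # constraints: list of (scope, relation) with scope a tuple of
        # variable indices and relation a set of allowed value tuples.
        solutions = [
            a for a in product(domain, repeat=n_vars)
            if all(tuple(a[v] for v in scope) in rel
                   for scope, rel in constraints)]
        if not solutions:
            return set()
        # A variable is frozen iff it takes one value across all solutions.
        return {v for v in range(n_vars)
                if len({sol[v] for sol in solutions}) == 1}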
Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early.
Reasoning About Actions in Narrative Understanding Reasoning about actions has been a focus of interest in AI from the beginning and continues to receive attention. But the range of situations considered has been rather narrow and falls well short of what is needed for understanding natural language. Language understanding requires sophisticated reasoning about actions and events, and the world's languages employ a variety of grammatical and lexical devices to construe, direct attention and focus on, and control inferences about actions and events. We implemented a neurally inspired computational model that is able to reason about linguistic action and event descriptions, such as those found in news stories. The system uses an active event representation that also seems to provide natural and cognitively motivated solutions to classical problems in logical theories of reasoning about actions. For logical approaches to reasoning about actions, we suggest that looking at story understanding sets up fairly strong desiderata both in terms of the fine-grained event and action distinctions and the kinds of real-time inferences required.
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.2
0.2
0.033333
0.02
0.006667
0
0
0
0
0
0
0
0
0
Redundantly grouped cross-object coding for repairable storage The problem of replenishing redundancy in erasure code based fault-tolerant storage has received a great deal of attention recently, leading to the design of several new coding techniques [3], aiming at a better repairability. In this paper, we adopt a different point of view, by proposing to code across different already encoded objects to alleviate the repair problem. We show that the addition of parity pieces - the simplest form of coding - significantly boosts repairability without sacrificing fault-tolerance for equivalent storage overhead. The simplicity of our approach as well as its reliance on time-tested techniques makes it readily deployable.
A new intra-disk redundancy scheme for high-reliability RAID storage systems in the presence of unrecoverable errors Today's data storage systems are increasingly adopting low-cost disk drives that have higher capacity but lower reliability, leading to more frequent rebuilds and to a higher risk of unrecoverable media errors. We propose an efficient intradisk redundancy scheme to enhance the reliability of RAID systems. This scheme introduces an additional level of redundancy inside each disk, on top of the RAID redundancy across multiple disks. The RAID parity provides protection against disk failures, whereas the proposed scheme aims to protect against media-related unrecoverable errors. In particular, we consider an intradisk redundancy architecture that is based on an interleaved parity-check coding scheme, which incurs only negligible I/O performance degradation. A comparison between this coding scheme and schemes based on traditional Reed--Solomon codes and single-parity-check codes is conducted by analytical means. A new model is developed to capture the effect of correlated unrecoverable sector errors. The probability of an unrecoverable failure associated with these schemes is derived for the new correlated model, as well as for the simpler independent error model. We also derive closed-form expressions for the mean time to data loss of RAID-5 and RAID-6 systems in the presence of unrecoverable errors and disk failures. We then combine these results to characterize the reliability of RAID systems that incorporate the intradisk redundancy scheme. Our results show that in the practical case of correlated errors, the interleaved parity-check scheme provides the same reliability as the optimum, albeit more complex, Reed--Solomon coding scheme. Finally, the I/O and throughput performances are evaluated by means of analysis and event-driven simulation.
EVENODD: an efficient scheme for tolerating double disk failures in RAID architectures We present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error
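To make the "simple exclusive-OR computations" concrete, here is a sketch of EVENODD encoding for an (m-1) x m data array with m prime, following the usual presentation with an imaginary all-zero row. The variable names are mine, and a real implementation would operate on disk blocks rather than small integers.

    def evenodd_encode(data, m):
        # data: (m-1) rows x m columns of symbols (ints), m prime.
        rows = m - 1
        a = [row[:] for row in data] + [[0] * m]   # imaginary zero row
        # Horizontal parity: XOR of each row.
        hpar = [0] * rows
        for i in range(rows):
            for j in range(m):
                hpar[i] ^= a[i][j]
        # S: XOR along the diagonal that passes through the imaginary row.
        S = 0
        for t in range(1, m):
            S ^= a[m - 1 - t][t]
        # Diagonal parity: S XORed with each wrapping diagonal.
        dpar = [0] * rows
        for l in range(rows):
            d = S
            for t in range(m):
                d ^= a[(l - t) % m][t]
            dpar[l] = d
        return hpar, dpar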
A case for redundant arrays of inexpensive disks (RAID) Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that cap- ture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimiza- tion problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field sur- round suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
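The L1-regularized least squares subproblem mentioned here has a well-known iterative soft-thresholding baseline, sketched below. The paper's feature-sign search is a faster exact method, so treat this only as an illustration of the objective; the scaling and names are my choices.

    import numpy as np

    def lasso_ista(B, x, lam, steps=500):
        # Minimize ||x - B s||^2 + lam * ||s||_1 over the code s by
        # iterative soft-thresholding (a simple stand-in for the paper's
        # feature-sign search algorithm).
        L = 2 * np.linalg.norm(B, 2) ** 2       # Lipschitz const. of the gradient
        s = np.zeros(B.shape[1])
        for _ in range(steps):
            g = 2 * B.T @ (B @ s - x)           # gradient of the quadratic part
            z = s - g / L
            s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return s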
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate checksum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
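The core incremental operation in iSAM, folding one new linearized measurement row into an existing triangular factor via Givens rotations, can be sketched in a few lines. In the real system the factor is sparse and the right-hand side is rotated along with it; this toy dense version omits both, and the names are mine.

    import numpy as np

    def add_row_givens(R, new_row):
        # R: n x n upper-triangular factor; new_row: length-n measurement
        # row. Returns the updated upper-triangular factor.
        R = R.astype(float).copy()
        w = np.asarray(new_row, dtype=float).copy()
        for k in range(R.shape[0]):
            if w[k] == 0.0:
                continue
            r = np.hypot(R[k, k], w[k])
            c, s = R[k, k] / r, w[k] / r        # Givens rotation coefficients
            Rk, wk = R[k, k:].copy(), w[k:].copy()
            R[k, k:] = c * Rk + s * wk          # rotate the factor row
            w[k:] = -s * Rk + c * wk            # zeroes out w[k]
        return R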
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.0125
0.002857
0.000219
0
0
0
0
0
0
0
0
0
0
Caching for bursts (C-Burst): let hard disks sleep well and work energetically High energy consumption has become a critical challenge in all kinds of computer systems. Hardware-supported Dynamic Power Management (DPM) provides a mechanism to save disk energy by transitioning an idle disk to a low-power mode. However, the achievable disk energy saving is mainly dependent on the pattern of I/O requests received at the disk. In particular, for a given number of requests, a bursty disk access pattern serves as a foundation for energy optimization. Aggressive prefetching has been used to increase disk access burstiness and extend disk idle intervals, while caching, a critical component in buffer cache management, has not been paid a specific attention. In the absence of cooperation from caching, the attempt to create bursty disk accesses would often be disturbed due to improper replacement decision made by energy unaware caching policies. In this paper, we present the design of a set of comprehensive energy-aware caching schemes, called C-Burst, and its implementation in Linux kernel 2.6.21. Our caching schemes leverage the 'filtering' effect of buffer cache to effectively reshape the disk access stream to a bursty pattern for energy saving. The experiments under various scenarios show that C-Burst schemes can achieve up to 35% disk energy saving with minimal performance loss.
Joint power management of memory and disk The paper presents a scheme to combine memory and power management for achieving better energy reduction. Our method periodically adjusts the size of physical memory and the timeout value to shut down a hard disk for reducing the average power consumption. We use Pareto distributions to model the distributions of idle time. The parameters of the distributions are adjusted at run-time for calculating the corresponding timeout value of the disk power management. The memory size is changed based on the inclusion property to predict the number of disk accesses at different memory sizes. Experimental results show more than 50% energy savings compared to a 2-competitive fixed-timeout method.
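Two small formulas carry much of the intuition here: the break-even timeout that any spin-down policy must respect, and the heavy-tail property of Pareto idle times that makes an adaptive timeout worthwhile. A minimal sketch, with illustrative parameter names of my choosing:

    def break_even_timeout(p_idle, p_sleep, e_transition):
        # Idle duration t beyond which sleeping wins:
        #   p_idle * t >= p_sleep * t + e_transition
        return e_transition / (p_idle - p_sleep)

    def pareto_expected_residual(alpha, t):
        # For a Pareto idle-time tail P(T > x) = (x_m / x)**alpha with
        # alpha > 1, the expected remaining idle time, given the disk has
        # already been idle for t, is t / (alpha - 1). It grows with t,
        # which is why fitting the distribution at run time pays off.
        return t / (alpha - 1)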
PS-BC: Power-saving considerations in design of buffer caches serving heterogeneous storage devices Under a replacement policy, existing operating systems identify and maintain most frequently used storage data in buffer caches located in main memory, aiming at low-latency I/O data accesses. However, replacement policies can also strongly affect energy consumptions of various connected storage devices, which has not been a consideration in the design and implementation of buffer cache management. In this paper, we present a system framework for an energy-aware buffer cache replacement, called PS-BC (power-saving buffer cache). By considering several critical factors affecting system energy consumption, PS-BC can effectively improve system energy efficiency, while it is able to flexibly incorporate conventional performance-oriented buffer cache replacement policies for different performance objectives. Our experimental studies based on a trace-driven simulation show that the PS-BC framework embedded with the CLOCK replacement policy can achieve an energy saving rate of up to 32.5% with a minimal overhead for various workloads.
Memory resource allocation for file system prefetching: from a supply chain management perspective As an important technique to hide disk I/O latency, prefetching has been widely studied, and dynamic adaptive prefetching techniques have been deployed in diverse storage environments. However, two issues are not well addressed by previous research: (1) how to handle the prefetching resource allocation between concurrent sequential access streams with different request rates, and (2) how to coordinate prefetching at multiple levels in the data access path. Interestingly, we found that these problems bear a strong resemblance to situations long studied in the field of supply chain management (SCM), used by retailers such as Wal-Mart. In this paper, we demonstrate how to perform the problem mapping and then apply SCM principles in practice, particularly from the branch of inventory theory, to improve data prefetching performance in storage systems. More specifically, we applied (1) two SCM policies to dynamically configure the sequential prefetching parameters, and (2) an SCM solution to correct the access pattern information distortion in multi-level prefetching. We implemented these SCM-based strategies in the Linux kernel prefetching algorithm and a multi-level storage simulator, and evaluated the performance with three types of workloads. The results indicate that the SCM approaches are able to generate up to a 55.0% of performance improvement for a real-world server workload benchmark, and up to 33.3% for a combination of Linux I/O-intensive applications.
PARAID: a gear-shifting power-aware RAID Reducing power consumption for server computers is important, since increased energy usage causes increased heat dissipation, greater cooling requirements, reduced computational density, and higher operating costs. For a typical data center, storage accounts for 27% of energy consumption. Conventional server-class RAIDs cannot easily reduce power because loads are balanced to use all disks even for light loads. We have built the Power-Aware RAID (PARAID), which reduces energy use of commodity server-class disks without specialized hardware. PARAID uses a skewed striping pattern to adapt to the system load by varying the number of powered disks. By spinning disks down during light loads, PARAID can reduce power consumption, while still meeting performance demands, by matching the number of powered disks to the system load. Reliability is achieved by limiting disk power cycles and using different RAID encoding schemes. Based on our five-disk prototype, PARAID uses up to 34% less power than conventional RAIDs, while achieving similar performance and reliability.
LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance Although LRU replacement policy has been commonly used in the buffer cache management, it is well known for its inability to cope with access patterns with weak locality. Previous work, such as LRU-K and 2Q, attempts to enhance LRU capacity by making use of additional history information of previous block references other than only the recency information used in LRU. These algorithms greatly increase complexity and/or cannot consistently provide performance improvement. Many recently proposed policies, such as UBM and SEQ, improve replacement performance by exploiting access regularities in references. They only address LRU problems on certain specific and well-defined cases such as access patterns like sequences and loops. Motivated by the limits of previous studies, we propose an efficient buffer cache replacement policy called LIRS. LIRS effectively addresses the limits of LRU by using recency to evaluate Inter-Reference Recency (IRR) for making a replacement decision. This is in contrast to what LRU does: directly using recency to predict next reference timing. At the same time, LIRS almost retains the same simple assumption of LRU to predict future access behavior of blocks. Our objectives are to effectively address the limits of LRU for a general purpose, to retain the low overhead merit of LRU, and to outperform those replacement policies relying on the access regularity detections. Conducting simulations with a variety of traces and a wide range of cache sizes, we show that LIRS significantly outperforms LRU, and outperforms other existing replacement algorithms in most cases. Furthermore, we show that the additional cost for implementing LIRS is trivial in comparison with LRU.
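The statistic LIRS is built on, Inter-Reference Recency, is easy to compute from a trace; the sketch below tracks it per block. This is not the LIRS replacement algorithm itself (which maintains LIR and HIR block sets on a stack), just the quantity it ranks blocks by; the function name is hypothetical.

    def irr_trace(accesses):
        # Most recent IRR per block: the number of distinct *other* blocks
        # referenced between consecutive accesses to the same block.
        # float('inf') marks blocks seen only once so far.
        last_pos, irr = {}, {}
        for i, b in enumerate(accesses):
            if b in last_pos:
                between = set(accesses[last_pos[b] + 1:i])
                between.discard(b)
                irr[b] = len(between)
            else:
                irr[b] = float('inf')
            last_pos[b] = i
        return irr

    # Example: irr_trace("abcab") gives IRR 2 for 'a' ({'b','c'} in between)
    # and IRR 2 for 'b', while 'c' has been seen only once.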
A comparison of FFS disk allocation policies The 4.4BSD file system includes a new algorithm for allocating disk blocks to files. The goal of this algorithm is to improve file clustering, increasing the amount of sequential I/O when reading or writing files, thereby improving file system performance. In this paper we study the effectiveness of this algorithm at reducing file system fragmentation. We have created a program that artificially ages a file system by replaying a workload similar to that experienced by a real file system. We used this program to evaluate the effectiveness of the new disk allocation algorithm by replaying ten months of activity on two file systems that differed only in the disk allocation algorithms that they used. At the end of the ten month simulation, the file system using the new allocation algorithm had approximately half the fragmentation of a similarly aged file system that used the traditional disk allocation algorithm. Measuring the performance difference between the two file systems by reading and writing the same set of files on the two systems showed that this decrease in fragmentation improved file write throughput by 20% and read throughput by 32%. In certain test cases, the new allocation algorithm provided a performance improvement of greater than 50%.
2008 USENIX Annual Technical Conference, Boston, MA, USA, June 22-27, 2008. Proceedings
WOW: wise ordering for writes - combining spatial and temporal locality in non-volatile caches Write caches using fast, non-volatile storage are now widely used in modern storage controllers since they enable hiding latency on writes. Effective algorithms for write cache management are extremely important since (i) in RAID-5, due to read-modify-write and parity updates, each write may cause up to four separate disk seeks while a read miss causes only a single disk seek; and (ii) typically, write cache size is much smaller than the read cache size - a proportion of 1 : 16 is typical. A write caching policy must decide: what data to destage. On one hand, to exploit temporal locality, we would like to destage data that is least likely to be re-written soon with the goal of minimizing the total number of destages. This is normally achieved using a caching algorithm such as LRW (least recently written). However, a read cache has a very small uniform cost of replacing any data in the cache, whereas the cost of destaging depends on the state of the disk heads. Hence, on the other hand, to exploit spatial locality, we would like to destage writes so as to minimize the average cost of each destage. This can be achieved by using a disk scheduling algorithm such as CSCAN, that destages data in the ascending order of the logical addresses, at the higher level of the write cache in a storage controller. Observe that LRW and CSCAN focus, respectively, on exploiting either temporal or spatial locality, but not both simultaneously. We propose a new algorithm, namely, Wise Ordering for Writes (WOW), for write cache management that effectively combines and balances temporal and spatial locality. Our experimental set-up consisted of an IBM xSeries 345 dual processor server running Linux that is driving a (software) RAID-5 or RAID-10 array using a workload akin to Storage Performance Council's widely adopted SPC-1 benchmark. In a cache-sensitive configuration on RAID-5, WOW delivers peak throughput that is 129% higher than CSCAN and 9% higher than LRW. In a cache-insensitive configuration on RAID-5, WOW and CSCAN deliver peak throughput that is 50% higher than LRW. For a random write workload with nearly 100% misses, on RAID-10, with a cache size of 64K, 4KB pages (256MB), WOW and CSCAN deliver peak throughput that is 200% higher than LRW. In summary, WOW has better or comparable peak throughput to the best of CSCAN and LRW across a wide gamut of write cache sizes and workload configurations. In addition, even at lower throughputs, WOW has lower average response times than CSCAN and LRW.
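The way WOW "combines and balances temporal and spatial locality" can be sketched as CSCAN order plus a CLOCK-style recency bit: destage in ascending logical-address order, but give recently rewritten pages one extra sweep. The class below is my simplification of that idea, not the paper's implementation.

    import bisect

    class WowSketch:
        def __init__(self):
            self.addrs = []      # dirty pages, sorted by logical address
            self.recent = {}     # addr -> recency bit, set on every write

        def write(self, addr):
            if addr not in self.recent:
                bisect.insort(self.addrs, addr)
            self.recent[addr] = True

        def next_destage(self, hand):
            # Sweep upward through addresses (CSCAN, spatial locality);
            # skip and clear pages whose recency bit is set (temporal
            # locality); destage the first page without the bit.
            # Returns (destaged_addr, new_hand). Assumes a non-empty cache.
            while True:
                hand %= len(self.addrs)
                addr = self.addrs[hand]
                if self.recent[addr]:
                    self.recent[addr] = False   # grant one more sweep
                    hand += 1
                else:
                    self.addrs.pop(hand)
                    del self.recent[addr]
                    return addr, hand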
The DASDBS Project: Objectives, Experiences, and Future Prospects A retrospective of the Darmstadt database system project, also known as DASDBS, is presented. The project is aimed at providing data management support for advanced applications, such as geo-scientific information systems and office automation. Similar to the dichotomy of RSS and RDS in System R, a layered architectural approach was pursued: a storage management kernel serves as the lowest common denominator of the requirements of the various applications classes, and a family of application-oriented front-ends provides semantically richer functions on top of the kernel. The lessons that were learned from building the DASDBS system are discussed. Particular emphasis is placed on the following issues: the role of nested relations, the experiences with using object buffers for coupling the system with the programming-language environment and the learning process in implementing multilevel transactions.
NP is as easy as detecting unique solutions For all known NP-complete problems the number of solutions in instances having solutions may vary over an exponentially large range. Furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. It is natural to ask whether the inherent intractability of NP-complete problems is caused by this wide variation. In this paper we give a negative answer to this using randomized reductions. We show that the problems of distinguishing between instances of SAT having zero or one solution, or finding solutions to instances of SAT having unique solutions, are as hard as SAT itself. Several corollaries about the difficulty of specific problems follow. For example if the parity of the number of solutions of SAT can be computed in RP then NP = RP. Some further problems can be shown to be hard for NP or DP via randomized reductions.
Many-layered learning. We explore incremental assimilation of new knowledge by sequential learning. Of particular interest is how a network of many knowledge layers can be constructed in an on-line manner, such that the learned units represent building blocks of knowledge that serve to compress the overall representation and facilitate transfer. We motivate the need for many layers of knowledge, and we advocate sequential learning as an avenue for promoting the construction of layered knowledge structures. Finally, our novel STL algorithm demonstrates a method for simultaneously acquiring and organizing a collection of concepts and functions as a network from a stream of unstructured information.
On the relations between stable and well-founded semantics of logic programs We study the relations between stable and well-founded semantics of logic programs. 1. We show that stable semantics can be defined in the same way as well-founded semantics based on the basic notion of unfounded sets. Hence, stable semantics can be considered as “two-valued well-founded semantics”. 2. An axiomatic characterization of stable and well-founded semantics of logic programs is given by a new completion theory, called strong completion . Similar to the Clark's completion, the strong completion can be interpreted in either two-valued or three-valued logic. We show that ◦ Two-valued strong completion specifies the stable semantics. ◦ Three-valued strong completion specifies the well-founded semantics. 3. We study the equivalence between stable semantics and well-founded semantics. At first, we prove the equivalence between the two semantics for strict programs. Then we introduce the bottom-stratified and top-strict condition generalizing both the stratifiability and the strictness, and show that the new condition is sufficient for the equivalence between stable and well-founded semantics. Further, we show that the call-consistency condition is sufficient for the existence of at least one stable model.
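Since this abstract leans on the basic machinery of stable models, a brute-force checker makes the definition concrete: a candidate atom set is stable iff it equals the least model of its Gelfond-Lifschitz reduct. A small sketch, with a rule encoding of my own choosing:

    from itertools import combinations

    def is_stable(program, candidate):
        # program: rules (head, positive_body, negative_body).
        # Reduct: drop rules blocked by the candidate; drop 'not' literals.
        reduct = [(h, pos) for h, pos, neg in program
                  if not (set(neg) & candidate)]
        least, changed = set(), True
        while changed:                       # least model of the reduct
            changed = False
            for h, pos in reduct:
                if set(pos) <= least and h not in least:
                    least.add(h)
                    changed = True
        return least == set(candidate)

    def stable_models(program, atoms):
        return [set(c) for r in range(len(atoms) + 1)
                for c in combinations(sorted(atoms), r)
                if is_stable(program, frozenset(c))]

    # p :- not q.  q :- not p.  has exactly the stable models {p} and {q}:
    # stable_models([("p", [], ["q"]), ("q", [], ["p"])], {"p", "q"})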
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.109319
0.02775
0.02775
0.010097
0.005592
0.001437
0.000372
0.000093
0.00003
0.000003
0
0
0
0
Exactly Sparse Extended Information Filters for Feature-based SLAM Recent research concerning the Gaussian canonical form for Simultaneous Localization and Mapping (SLAM) has given rise to a handful of algorithms that attempt to solve the SLAM scalability problem for arbitrarily large environments. One such estimator that has received due attention is the Sparse Extended Information Filter (SEIF) proposed by Thrun et al., which is reported to be nearly constant time, irrespective of the size of the map. The key to the SEIF's scalability is to prune weak links in what is a dense information (inverse covariance) matrix to achieve a sparse approximation that allows for efficient, scalable SLAM. We demonstrate that the SEIF sparsification strategy yields error estimates that are overconfident when expressed in the global reference frame, while empirical results show that relative map consistency is maintained. In this paper, we propose an alternative scalable estimator based on an information form that maintains sparsity while preserving consistency. The paper describes a method for controlling the population of the information matrix, whereby we track a modified version of the SLAM posterior, essentially by ignoring a small fraction of temporal measurements. In this manner, the Exactly Sparse Extended Information Filter (ESEIF) performs inference over a model that is conservative relative to the standard Gaussian distribution. We compare our algorithm to the SEIF and standard EKF both in simulation as well as on two nonlinear datasets. The results convincingly show that our method yields conservative estimates for the robot pose and map that are nearly identical to those of the EKF.
Consistency Analysis and Improvement of Vision-aided Inertial Navigation In this paper, we study estimator inconsistency in vision-aided inertial navigation systems (VINS) from the standpoint of system’s observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, which results in smaller uncertainties, larger estimation errors, and divergence. We develop an observability constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. This framework is applicable to several variants of the VINS problem such as visual simultaneous localization and mapping (V-SLAM), as well as visual-inertial odometry using the multi-state constraint Kalman filter (MSC-KF). Our analysis, along with the proposed method to reduce inconsistency, are extensively validated with simulation trials and real-world experimentation.
Power-SLAM: a linear-complexity, anytime algorithm for SLAM In this paper, we present an extended Kalman filter (EKF)-based estimator for simultaneous localization and mapping (SLAM) with processing requirements that are linear in the number of features in the map. The proposed algorithm, called the Power-SLAM, is based on three key ideas. Firstly, by introducing the Global Map Postponement method, approximations necessary for ensuring linear computational complexity of EKF-based SLAM are delayed over multiple time steps. Then by employing the Power Method, only the most informative of the Kalman vectors, generated during the postponement phase, are retained for updating the covariance matrix. This ensures that the information loss during each approximation epoch is minimized. Next, linear-complexity, rank-2 updates, that minimize the trace of the covariance matrix, are employed to increase the speed of convergence of the estimator. The resulting estimator, in addition to being conservative as compared to the standard EKF, has processing requirements that can be adjusted depending on the availability of computational resources. Lastly, simulation and experimental results are presented that demonstrate the accuracy of the proposed algorithm (Power-SLAM) when compared to the standard EKF-based SLAM with quadratic computational cost and two linear-complexity competing alternatives.
Visual-Inertial Monocular SLAM With Map Reuse. In recent years there have been excellent results in visual-inertial odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this letter, we present a novel tightly coupled ...
FrameSLAM: From Bundle Adjustment to Real-Time Visual Mapping Many successful indoor mapping techniques employ frame-to-frame matching of laser scans to produce detailed local maps as well as the closing of large loops. In this paper, we propose a framework for applying the same techniques to visual imagery. We match visual frames with large numbers of point features, using classic bundle adjustment techniques from computational vision, but we keep only relative frame pose information (a skeleton). The skeleton is a reduced nonlinear system that is a faithful approximation of the larger system and can be used to solve large loop closures quickly, as well as forming a backbone for data association and local registration. We illustrate the workings of the system with large outdoor datasets (10 km), showing large-scale loop closure and precise localization in real time.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
On the facial structure of set packing polyhedra In this paper we address ourselves to identifying facets of the set packing polyhedron, i.e., of the convex hull of integer solutions to the set covering problem with equality constraints and/or constraints of the form "≤". This is done by using the equivalent node-packing problem derived from the intersection graph associated with the problem under consideration. First, we show that the cliques of the intersection graph provide a first set of facets for the polyhedron in question. Second, it is shown that the cycles without chords of odd length of the intersection graph give rise to a further set of facets. A rather strong geometric property of this set of facets is exhibited.
Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008, Napa Valley, California, USA, October 26-30, 2008
Safety, Visibility, and Performance in a Wide-Area File System As mobile clients travel, their costs to reach home filing services change, with serious performance implications. Current file systems mask these performance problems by reducing the safety of updates, their visibility, or both. This is the result of combining the propagation and notification of updates from clients to servers.Fluid Replication separates these mechanisms. Client updates are shipped to nearby replicas, called WayStations, rather than remote servers, providing inexpensive safety. WayStations and servers periodically exchange knowledge of updates through reconciliation, providing a tight bound on the time until updates are visible. Reconciliation is non-blocking, and update contents are not propagated immediately; propagation is deferred to take advantage of the low incidence of sharing in file systems.Our measurements of a Fluid Replication prototype show that update performance is completely independent of wide-area networking costs, at the expense of increased sharing costs. This places the costs of sharing on those who require it, preserving common case performance. Furthermore, the benefits of independent update outweigh the costs of sharing for a workload with substantial sharing. A trace-based simulation shows that a modest reconciliation interval of 15 seconds can eliminate 98% of all stale accesses. Furthermore, our traced clients could collectively expect availability of five nines, even with deferred propagation of updates.
Formations of vehicles in cyclic pursuit Inspired by the so-called "bugs" problem from mathematics, we study the geometric formations of multivehicle systems under cyclic pursuit. First, we introduce the notion of cyclic pursuit by examining a system of identical linear agents in the plane. This idea is then extended to a system of wheeled vehicles, each subject to a single nonholonomic constraint (i.e., unicycles), which is the principal focus of this paper. The pursuit framework is particularly simple in that the n identical vehicles are ordered such that vehicle i pursues vehicle i+1 modulo n. In this paper, we assume each vehicle has the same constant forward speed. We show that the system's equilibrium formations are generalized regular polygons, and we show how the multivehicle system's global behavior can be shaped through appropriate controller gain assignments. We then study the local stability of these equilibrium polygons, revealing which formations are stable and which are not.
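For the linear-agent case this abstract begins with, the closed-loop system is circulant, which is what makes the equilibrium analysis tractable. A worked sketch of that standard step, in notation of my own choosing rather than quoted from the paper:

    \dot z_i = k\,(z_{i+1} - z_i), \qquad i = 1, \dots, n \ (\text{indices mod } n)
    \quad\Longrightarrow\quad
    \dot z = k\,(C - I)\, z ,

    % C is the cyclic permutation matrix, hence circulant, so the
    % closed-loop eigenvalues are
    \lambda_j = k \bigl( e^{2\pi \mathrm{i} j / n} - 1 \bigr), \qquad j = 0, \dots, n - 1 .

For k > 0 every lambda_j has negative real part except lambda_0 = 0, whose eigenvector is the centroid direction, so the linear agents converge to their stationary centroid; the paper's contribution is the far subtler analysis when the agents are unicycles rather than integrators.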
The Performance of Parity Placements in Disk Arrays Due to recent advances in central processing unit (CPU) and memory system performance, input/output (I/O) systems are increasingly limiting the performance of modern computer systems. Redundant arrays of inexpensive disks (RAID) have been proposed to meet the impending I/O crisis. RAIDs substitute many small inexpensive disks for a few large expensive disks to provide higher performance, smaller footprints, and lower power consumption at a lower cost than the large expensive disks they replace. RAIDs provide high availability by using parity encoding of data to survive disk failures. It is shown that the way parity is distributed in a RAID has significant consequences for performance. The performances of eight different parity placements are investigated using simulation.
Wsben: A Web Services Discovery And Composition Benchmark Toolkit In this article, a novel benchmark toolkit, WSBen, for testing web services discovery and composition algorithms is presented. The WSBen includes: (1) a collection of synthetically generated web services files in WSDL format with diverse data and model characteristics; (2) queries for testing discovery and composition algorithms; (3) auxiliary files to do statistical analysis on the WSDL test sets; (4) converted WSDL test sets that conventional AI planners can read; and (5) a graphical interface to control all these behaviors. Users can fine-tune the generated WSDL test files by varying underlying network models. To illustrate the application of the WSBen, in addition, we present case studies from three domains: (1) web service composition; (2) AI planning; and (3) the laws of networks in the physics community. It is our hope that WSBen will provide useful insights in evaluating the performance of web services discovery and composition algorithms. The WSBen toolkit is available at: http://pike.psu.edu/sw/wsben/.
Exploring Sequence Alignment Algorithms On Fpga-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance compared to other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work concludes with current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.2
0.2
0.2
0.2
0.066667
0.028571
0
0
0
0
0
0
0
0
A Formal Assessment Result for Fluent Calculus Using the Action Description Language Ak Systematic approaches like the family of Action Description Languages have been designed for the formal assessment of action calculi. We assess the fluent calculus for knowledge and sensing with the help of the recently developed, high-level action language Ak. As the main result, we present a provably correct embedding of this language into fluent calculus. As a spin-off, the action programming language FLUX, which is based on fluent calculus, provides a system for answering queries to Ak domains. Conversely, the action description language may serve as a high-level surface language for specifying action domains in FLUX.
Approximate postdictive reasoning with answer set programming We present an answer set programming realization of the h-approximation (HPX) theory [8] as an efficient and provably sound reasoning method for epistemic planning and projection problems that involve postdictive reasoning. The efficiency of HPX stems from an approximate knowledge state representation that involves only a linear number of state variables, as compared to an exponential number for theories that utilize a possible-worlds based semantics. This causes a relatively low computational complexity, i.e., the planning problem is in NP under reasonable restrictions, at the cost that HPX is incomplete. In this paper, we use the implementation of HPX to investigate the incompleteness issue and present an empirical evaluation of the solvable fragment and its performance. We find that the solvable fragment of HPX is indeed reasonable and fairly large: on average about 85% of the considered projection problem instances can be solved, compared to a PWS-based approach with exponential complexity as baseline. In addition to the empirical results, we demonstrate the manner in which HPX can be applied in a real robotic control task within a smart home, where our scenario illustrates the usefulness of postdictive reasoning to achieve error-tolerance by abnormality detection in a high-level decision-making task.
Approximate Epistemic Planning with Postdiction as Answer-Set Programming. We propose a history-based approximation of the Possible Worlds Semantics (PWS) for reasoning about knowledge and action. A respective planning system is implemented by a transformation of the problem domain to an Answer-Set Program. The novelty of our approach is elaboration tolerant support for postdiction under the condition that the plan existence problem is still solvable in NP, as compared to $\Sigma_2^P$ for non-approximated PWS of Son and Baral [20]. We demonstrate our planner with standard problems and present its integration in a cognitive robotics framework for high-level control in a smart home.
Narrative based Postdictive Reasoning for Cognitive Robotics. Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real world considerations is a challenge and crucial requirement for cognitive robotics systems. An added challenge, even when suitably specialised action languages and reasoning systems exist, is practical integration and application within large-scale robot control frameworks. In the backdrop of an autonomous wheelchair robot control task, we report on application-driven work to realise postdiction triggered abnormality detection and re-planning for real-time robot control: (a) Narrative-based knowledge about the environment is obtained via a larger smart environment framework; and (b) abnormalities are postdicted from stable-models of an answer-set program corresponding to the robot's epistemic model. The overall reasoning is performed in the context of an approximate epistemic action theory based planner implemented via a translation to answer-set programming.
Weak, strong, and strong cyclic planning via symbolic model checking Planning in nondeterministic domains yields both conceptual and practical difficulties. From the conceptual point of view, different notions of planning problems can be devised: for instance, a plan might either guarantee goal achievement, or just have some chances of success. From the practical point of view, the problem is to devise algorithms that can effectively deal with large state spaces. In this paper, we tackle planning in nondeterministic domains by addressing conceptual and practical problems. We formally characterize different planning problems, where solutions have a chance of success ("weak planning"), are guaranteed to achieve the goal ("strong planning"), or achieve the goal with iterative trial-and-error strategies ("strong cyclic planning"). In strong cyclic planning, all the executions associated with the solution plan always have a possibility of terminating and, when they do, they are guaranteed to achieve the goal. We present planning algorithms for these problem classes, and prove that they are correct and complete. We implement the algorithms in the MBP planner by using symbolic model checking techniques. We show that our approach is practical with an extensive experimental evaluation: MBP compares positively with state-of-the-art planners, both in terms of expressiveness and in terms of performance.
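The "strong planning" notion here has a compact backward-fixpoint characterization. The sketch below is an explicit-state stand-in for the paper's BDD-based symbolic algorithm, with the strong preimage supplied as a callback; the encoding and names are mine.

    def strong_plan(init, goal, strong_preimage):
        # strong_preimage(S) must yield (state, action) pairs such that
        # EVERY nondeterministic outcome of the action lands inside S.
        policy, covered = {}, set(goal)
        while not set(init) <= covered:
            frontier = {(s, a) for s, a in strong_preimage(covered)
                        if s not in covered}
            if not frontier:
                return None            # no strong plan exists
            for s, a in frontier:
                policy.setdefault(s, a)
            covered |= {s for s, _ in frontier}
        return policy                  # guaranteed to reach the goal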
h-approximation: History-Based Approximation of Possible World Semantics as ASP We propose an approximation of the Possible Worlds Semantics (PWS) for action planning. A corresponding planning system is implemented by a transformation of the action specification to an Answer-Set Program. A novelty is support for postdiction wrt. (a) the plan existence problem in our framework can be solved in NP, as compared to $\Sigma_2^P$ for non-approximated PWS of Baral(2000); and (b) the planner generates optimal plans wrt. a minimal number of actions in $\Delta_2^P$. We demo the planning system with standard problems, and illustrate its integration in a larger software framework for robot control in a smart home.
What is planning in the presence of sensing? Despite the existence of programs that are able to generate so-called conditional plans, there has yet to emerge a clear and general specification of what it is these programs are looking for: what exactly is a plan in this setting, and when is it correct? In this paper, we develop and motivate a specification within the situation calculus of conditional and iterative plans over domains that include binary sensing actions. The account is built on an existing theory of action which includes a solution to the frame problem, and an extension to it that handles sensing actions and the effect they have on the knowledge of a robot. Plans are taken to be programs in a new simple robot program language, and the planning task is to find a program that would be known by the robot at the outset to lead to a final situation where the goal is satisfied. This specification is used to analyze the correctness of a small example plan, as well as variants that have redundant or missing sensing actions. We also investigate whether the proposed robot program language is powerful enough to serve for any intuitively achievable goal.
Planning as refinement search: a unified framework for evaluating design tradeoffs in partial-order planning Despite the long history of classical planning, there has been very little comparative analysis of the performance tradeoffs offered by the multitude of existing planning algorithms. This is partly due to the many different vocabularies within which planning algorithms are usually expressed. In this paper we show that refinement search provides a unifying framework within which various planning algorithms can be cast and compared. Specifically, we will develop refinement search semantics for planning, provide a generalized algorithm for refinement planning, and show that planners that search in the space of (partial) plans are specific instantiations of this algorithm. The different design choices in partial order planning correspond to the different ways of instantiating the generalized algorithm. We will analyze how these choices affect the search-space size and refinement cost of the resultant planner, and show that in most cases they trade one for the other. Finally, we will concentrate on two specific design choices, viz., protection strategies and tractability refinements, and develop some hypotheses regarding the effect of these choices on the performance on practical problems. We will support these hypotheses with a series of focused empirical studies.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Refinement Planning as a Unifying Framework for Plan Synthesis
Learning to Take Actions We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a class of rule-based action strategies for which polynomial time learning is possible. The representation of strategies is a generalization of decision lists; strategies include rules with existentially quantified conditions, simple recursive predicates, and small internal state, but are syntactically restricted. We also study the learnability of hierarchically composed strategies where a subroutine already acquired can be used as a basic action in a higher-level strategy. We prove some positive results in this setting, but also show that in some cases the hierarchical learning problem is computationally hard.
Multimedia answering: enriching text QA with media information Existing community question-answering forums usually provide only textual answers. However, for many questions, pure texts cannot provide intuitive information, while image or video contents are more appropriate. In this paper, we introduce a scheme that is able to enrich text answers with image and video information. Our scheme investigates a rich set of techniques including question/answer classification, query generation, image and video search reranking, etc. Given a question and the community-contributed answer, our approach is able to determine which type of media information should be added, and then automatically collects data from Internet to enrich the textual answer. Different from some efforts that attempt to directly answer questions with image and video data, our approach is built based on the community-contributed textual answers and thus it is more feasible and able to deal with more complex questions. We have conducted empirical study on more than 3,000 QA pairs and the results demonstrate the effectiveness of our approach.
Global reinforcement learning in neural networks. In this letter, we have found a more general formulation of the REward Increment = Nonnegative Factor x Offset Reinforcement x Characteristic Eligibility (REINFORCE) learning principle first suggested by Williams. The new formulation has enabled us to apply the principle to global reinforcement learning in networks with various sources of randomness, and to suggest several simple local rules for such networks. Numerical simulations have shown that for simple classification and reinforcement learning tasks, at least one family of the new learning rules gives results comparable to those provided by the famous Rules A(r-i) and A(r-p) for the Boltzmann machines.
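For concreteness, here is a small numpy sketch of the original REINFORCE rule that the letter generalizes, specialized to a layer of Bernoulli-logistic units: the weight change is learning rate times (reward minus baseline) times the characteristic eligibility, which for this unit family is (y - p) x^T. The two-function split (sample first, update once the reward is observed) and all names are illustrative choices, not the letter's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(w, x):
    """Stochastic forward pass of a layer of Bernoulli-logistic units.
    w: (units, features) weights; x: (features,) input."""
    p = 1.0 / (1.0 + np.exp(-w @ x))              # firing probabilities
    y = (rng.random(p.shape) < p).astype(float)   # random binary outputs
    return y, y - p                               # output and per-unit score

def reinforce_update(w, x, score, r, baseline, lr=0.1):
    """REINFORCE: dW = lr * (r - baseline) * characteristic eligibility,
    where the eligibility for this unit family is (y - p) x^T."""
    return w + lr * (r - baseline) * np.outer(score, x)
```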
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.1124
0.1248
0.0624
0.019671
0.004427
0.001299
0.000131
0.000022
0.000005
0
0
0
0
0
Deep Gender Classification and Visualization of Near-Infra-Red Periocular-Iris images In this paper, we present an approach for automatic pixel-based feature extraction for gender classification from near-infrared (NIR) periocular iris images with deep learning. Previous work on gender-from-iris tried to find, by hand, the best feature-extraction methods to represent the gender information of the iris texture from normalized and encoded images. Applying soft biometrics with deep learning to NIR periocular iris images is a new topic because of the small number of gender-labeled images available. In this work, we used bottleneck features, fine-tuning, and a Convolutional Neural Network (CNN) trained from scratch to identify the most relevant areas of periocular iris images. Training a CNN from scratch on a small number of images with data augmentation reached the best classification rate and automatically found the most relevant areas for this task. We conclude that training a model from scratch, even with a small number of layers, performs better on this kind of problem than using powerful pre-trained models such as VGG and ResNet. The best result from our CNN trained from scratch was 85.48% accuracy for gender classification.
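A CNN "trained from scratch with a small number of layers", as described above, can be stated in a few lines of Keras. The sketch below is a minimal, hypothetical architecture with on-the-fly augmentation; the input shape, layer sizes, and augmentation choices are illustrative assumptions, not the network from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical 120x160 single-channel NIR periocular crops, binary gender label.
model = tf.keras.Sequential([
    layers.Input(shape=(120, 160, 1)),
    layers.RandomTranslation(0.05, 0.05),      # light data augmentation
    layers.RandomZoom(0.1),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # probability of one class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=50)  # train_ds: labeled crops
```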
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
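The computation the abstract describes reduces to an eigenvalue problem on the kernel (Gram) matrix. Here is a minimal numpy sketch of kernel PCA with an RBF kernel, assumed purely for illustration (the paper's polynomial kernel would swap in with one line); the double-centering step performs the feature-space mean removal.

```python
import numpy as np

def kernel_pca(X, k, gamma=1.0):
    """Project X (n_samples x n_features) onto its k leading
    nonlinear principal components, using an RBF kernel."""
    sq = np.sum(X * X, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(K)
    J = np.full((n, n), 1.0 / n)
    Kc = K - J @ K - K @ J + J @ K @ J          # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    return vecs * np.sqrt(np.maximum(vals, 0))  # component scores of the samples
```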
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
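The "square root" in Square Root SAM is a matrix factorization of the least-squares problem that the linearized SLAM graph defines. A dense numpy sketch of one smoothing step appears below; a real implementation factorizes sparsely with a fill-reducing column ordering such as COLAMD, so the dense `np.linalg.qr` call here is purely illustrative.

```python
import numpy as np

def sqrt_sam_step(A, b):
    """One Gauss-Newton step via QR: A is the (linearized) measurement
    Jacobian over all poses and landmarks, b the stacked residuals.
    R below plays the role of the square-root information matrix."""
    Q, R = np.linalg.qr(A)        # A = Q R, with R upper triangular
    d = Q.T @ b                   # rotate residuals into the R frame
    return np.linalg.solve(R, d)  # back-substitution gives the state update
```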
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature-mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Building Program Vector Representations for Deep Learning. Deep learning has made significant breakthroughs in various fields of artificial intelligence. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis. We evaluate the learned vector representations both qualitatively and quantitatively. We conclude, based on the experiments, that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks, and achieve higher accuracy in the program classification task than "shallow" methods. This result confirms the feasibility of deep learning to analyze programs.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature-mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
Distributed data mining based on deep neural network for wireless sensor network As the sample data of wireless sensor networks (WSNs) has increased rapidly with more and more sensors, a centralized data-mining solution in a fusion center faces the challenges of reducing the fusion center's computational load and saving the WSN's transmission power. Rising to these challenges, this paper proposes a distributed data-mining method based on a deep neural network (DNN), dividing the network into layers and placing them on the sensors. With the proposed solution, the distributed data-mining units in the WSN take over much of the fusion center's computational burden, and transmitting the data processed by the DNN consumes much less power than transmitting the raw data. A fault-detection scenario is built to verify the validity of the method. Results show a detection rate of 99%, with the WSN sharing 64.06% of the data-mining computation and achieving a 58.31% reduction in power consumption.
Neural Network Based Transmit Power Control and Interference Cancellation for MIMO Small Cell Networks The random deployment of small cell base stations (BSs) causes the coverage areas of neighboring cells to overlap, which increases intercell interference and degrades the system capacity. This paper proposes a new intercell interference management (IIM) scheme to improve the system capacity in multiple-input multiple-output (MIMO) small cell networks. The proposed IIM scheme consists of both an interference cancellation (IC) technique on the receiver side, and a neural network (NN) based power control algorithm for intercell interference coordination (ICIC) on the transmitter side. In order to improve the system capacity, the NN power control optimizes downlink transmit power while IC eliminates interfering signals from received signals. Computer simulations compare the system capacity of the MIMO network with several ICIC algorithms: the NN, the greedy search, the belief propagation (BP), the distributed pricing (DP), and the maximum power, all of which can be combined with IC reception. Furthermore, this paper investigates the application to the mobile communication field of a multi-layered NN structure, known as deep learning, together with its pre-training scheme. It is shown that the performance of NN is better than that of BP and very close to that of greedy search. The low complexity of the NN algorithm makes it suitable for IIM. It is also demonstrated that combining IC and sectorization of BSs acquires high capacity gain owing to reduced interference.
SpotGarbage: smartphone app to detect garbage using deep learning. Maintaining a clean and hygienic civic environment is an indispensable yet formidable task, especially in developing countries. With the aim of engaging citizens to track and report on their neighborhoods, this paper presents a novel smartphone app, called SpotGarbage, which detects and coarsely segments garbage regions in a user-clicked geo-tagged image. The app utilizes the proposed deep architecture of fully convolutional networks for detecting garbage in images. The model has been trained on a newly introduced Garbage In Images (GINI) dataset, achieving a mean accuracy of 87.69%. The paper also proposes optimizations in the network architecture resulting in a reduction of 87.9% in memory usage and 96.8% in prediction time with no loss in accuracy, facilitating its usage in resource constrained smartphones.
Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs. Sparse high-dimensional data vectors are common in many application domains where a very large number of rarely non-zero features can be devised. Unfortunately, this creates a computational bottleneck for unsupervised feature learning algorithms such as those based on auto-encoders and RBMs, because they involve a reconstruction step where the whole input vector is predicted from the current feature values. An algorithm was recently developed to successfully handle the case of auto-encoders, based on an importance sampling scheme stochastically selecting which input elements to actually reconstruct during training for each particular example. To generalize this idea to RBMs, we propose a stochastic ratio-matching algorithm that inherits all the computational advantages and unbiasedness of the importance sampling scheme. We show that stochastic ratio matching is a good estimator, allowing the approach to beat the state-of-the-art on two bag-of-word text classification benchmarks (20 Newsgroups and RCV1), while keeping computational cost linear in the number of non-zeros.
Links between perceptrons, MLPs and SVMs We propose to study links between three important classification algorithms: Perceptrons, Multi-Layer Perceptrons (MLPs) and Support Vector Machines (SVMs). We first study ways to control the capacity of Perceptrons (mainly regularization parameters and early stopping), using the margin idea introduced with SVMs. After showing that under simple conditions a Perceptron is equivalent to an SVM, we show it can be computationally expensive in time to train an SVM (and thus a Perceptron) with stochastic gradient descent, mainly because of the margin maximization term in the cost function. We then show that if we remove this margin maximization term, the learning rate or the use of early stopping can still control the margin. These ideas are extended afterward to the case of MLPs. Moreover, under some assumptions it also appears that MLPs are a kind of mixture of SVMs, maximizing the margin in the hidden layer space. Finally, we present a very simple MLP based on the previous findings, which yields better performances in generalization and speed than the other models.
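The margin link the abstract draws between Perceptrons and SVMs can be seen directly in code: if the Perceptron updates whenever the functional margin falls below a threshold, rather than only on outright mistakes, then the learning rate and early stopping control the margin it ends up with. A small numpy sketch of such a margin Perceptron follows; the threshold value and the stopping rule are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def margin_perceptron(X, y, margin=1.0, lr=0.01, epochs=100):
    """Perceptron trained with a margin criterion; labels y must be +/-1."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        violated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi) < margin:   # margin violation, not just a mistake
                w += lr * yi * xi
                violated = True
        if not violated:                 # early stop once the margin holds everywhere
            break
    return w
```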
Action-Conditional Video Prediction using Deep Networks in Atari Games Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future image-frames depend on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.
How to Construct Deep Recurrent Neural Networks. In this paper, we explore different ways to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper; (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs.
Learning Overcomplete Representations In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
LIBSVM: A library for support vector machines LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
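In practice, the LIBSVM machinery the article details (RBF-kernel training, one-vs-one multiclass classification, probability estimates, and parameter selection) is reachable from Python, for example through scikit-learn, whose SVC class wraps LIBSVM. A short usage sketch, with the dataset and grid values chosen purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Grid search over C and gamma mirrors the parameter-selection step
# discussed above; probability=True enables LIBSVM's probability estimates.
grid = GridSearchCV(SVC(kernel="rbf", probability=True),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```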
Wishful search: interactive composition of data mashups With the emergence of Yahoo Pipes and several similar services, data mashup tools have started to gain the interest of business users. Making these tools simple and accessible to users with little or no programming experience has become a pressing issue. In this paper we introduce MARIO (Mashup Automation with Runtime Orchestration and Invocation), a new tool that radically simplifies data mashup composition. We have developed an intelligent automatic composition engine in MARIO together with a simple user interface using an intuitive "wishful search" abstraction. It thus allows users to explore the space of potentially composable data mashups and preview composition results as they iteratively refine their "wishes", i.e. mashup composition goals. It also lets users discover and make use of system capabilities without having to understand the capabilities of individual components, and instantly reflects changes made to the components by presenting an aggregate view of changed capabilities of the entire system. We describe our experience with using MARIO to compose flows of Yahoo Pipes components.
A file is not a file: understanding the I/O behavior of Apple desktop applications We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.
Proof Systems for Effectively Propositional Logic We consider proof systems for effectively propositional logic. First, we show that propositional resolution for effectively propositional logic may have exponentially longer refutations than resolution for this logic. This shows that methods based on ground instantiation may be weaker than non-ground methods. Second, we introduce a generalisation rule for effectively propositional logic and show that resolution for this logic may have exponentially longer proofs than resolution with generalisation. We also discuss some related questions, such as sort assignments for generalisation.
Secure the Cloud: From the Perspective of a Service-Oriented Organization In response to the revival of virtualized technology by Rosenblum and Garfinkel [2005], NIST defined cloud computing, a new paradigm in service computing infrastructures. In cloud environments, the basic security mechanism is ingrained in virtualization—that is, the execution of instructions at different privilege levels. Despite its obvious benefits, the caveat is that a crashed virtual machine (VM) is much harder to recover than a crashed workstation. When crashed, a VM is nothing but a giant corrupt binary file and quite unrecoverable by standard disk-based forensics. Therefore, VM crashes should be avoided at all costs. Security is one of the major contributors to such VM crashes. This includes compromising the hypervisor, cloud storage, images of VMs used infrequently, and remote cloud client used by the customer as well as threat from malicious insiders. Although using secure infrastructures such as private clouds alleviate several of these security problems, most cloud users end up using cheaper options such as third-party infrastructures (i.e., private clouds), thus a thorough discussion of all known security issues is pertinent. Hence, in this article, we discuss ongoing research in cloud security in order of the attack scenarios exploited most often in the cloud environment. We explore attack scenarios that call for securing the hypervisor, exploiting co-residency of VMs, VM image management, mitigating insider threats, securing storage in clouds, abusing lightweight software-as-a-service clients, and protecting data propagation in clouds. Wearing a practitioner's glasses, we explore the relevance of each attack scenario to a service company like Infosys. At the same time, we draw parallels between cloud security research and implementation of security solutions in the form of enterprise security suites for the cloud. We discuss the state of practice in the form of enterprise security suites that include cryptographic solutions, access control policies in the cloud, new techniques for attack detection, and security quality assurance in clouds.
1.0525
0.06
0.06
0.06
0.03
0.010001
0.000833
0.000013
0.000001
0
0
0
0
0
Trie-based apriori motif discovery approach One of the hardest and longest-standing problems in bioinformatics is motif discovery in biological sequences: the problem of finding recurring patterns in these sequences. Apriori is a well-known data mining algorithm used to mine frequent patterns in large datasets. In this paper, we apply Apriori to the problem of discovering common motifs. We propose three modifications to adapt the classic Apriori to our problem. First, the Trie data structure is used to store all biological sequences under examination. Second, both the frequent pattern extraction and the candidate generation steps are done using the same data structure, the Trie. The Trie allows simultaneously searching all possible starting points in the sequences for any occurrence of a given pattern. Third, instead of using only the support as a measure to assess frequent patterns, a new measure, the normalized information content (normIC), is proposed, which is able to distinguish motifs in real promoter sequences. Preliminary experiments are conducted on Tompa's benchmark to investigate the performance of our proposed algorithm, the Trie-based Apriori Motif Discovery (TrieAMD). Results show that our algorithm outperforms all of the tested tools on real datasets for average sensitivity.
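To make the Trie/Apriori combination concrete, here is a small Python sketch: every suffix of every sequence (truncated to the maximum motif length) is inserted into a trie whose nodes remember which sequences pass through them, so counting a candidate's support and doing the level-wise Apriori extension are both single root-to-node walks. The normIC measure is omitted; plain support, the node layout, and all names are simplifications for illustration, not TrieAMD's implementation.

```python
def build_trie(sequences, max_len):
    """Insert every length-limited suffix; each node records the ids of
    the sequences containing the prefix spelled out on the path to it."""
    trie = {}
    for sid, seq in enumerate(sequences):
        for start in range(len(seq)):
            node = trie
            for ch in seq[start:start + max_len]:
                node = node.setdefault(ch, {})
                node.setdefault("ids", set()).add(sid)
    return trie

def support(trie, motif):
    """Support = number of sequences containing the motif (one trie walk)."""
    node = trie
    for ch in motif:
        if ch not in node:
            return 0
        node = node[ch]
    return len(node["ids"])

def trie_apriori(sequences, min_support, max_len, alphabet="ACGT"):
    trie = build_trie(sequences, max_len)
    level = [c for c in alphabet if support(trie, c) >= min_support]
    frequent = list(level)
    for _ in range(max_len - 1):         # Apriori: extend only frequent candidates
        level = [m + c for m in level for c in alphabet
                 if support(trie, m + c) >= min_support]
        frequent.extend(level)
    return frequent
```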
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can provide up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature-mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required for training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that an array's disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent even at catastrophic failure rates at which the array would lose up to a quarter of its storage capacity in a year.
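As a rough illustration of the parity layout described above, the sketch below builds the n row parities and n column parities of an n x n data grid with XOR, mirrors the row parities as the n extra elements, and reconstructs one lost block. The integer "blocks" are stand-ins for real disk contents, and the choice of which parities to mirror is our simplification:

```python
# Toy 2-D parity sketch: n^2 data elements, n row + n column parities,
# plus n mirror elements.  XOR plays the role of parity computation.
n = 3
data = [[r * n + c + 1 for c in range(n)] for r in range(n)]

def xor_all(blocks):
    acc = 0
    for blk in blocks:
        acc ^= blk
    return acc

row_parity = [xor_all(row) for row in data]
col_parity = [xor_all(data[r][c] for r in range(n)) for c in range(n)]
mirror = list(row_parity)   # the n extra elements mirroring half the parities

# Recover a lost data element from its row parity and surviving row blocks:
lost_r, lost_c = 1, 2
recovered = row_parity[lost_r] ^ xor_all(
    data[lost_r][c] for c in range(n) if c != lost_c)
assert recovered == data[lost_r][lost_c]
print("recovered block:", recovered)
```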
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reasoning about actions with loops via Hoare logic. Plans with loops are more general and compact than classical sequential plans, and are gaining increasing attention in artificial intelligence (AI). While many existing approaches focus mainly on algorithmic issues, little work has been devoted to the semantic foundations of planning with loops. In this paper, we first develop a tailored action language ALK, together with two semantics for handling domains with non-deterministic actions and loops. Then we propose a sound and (relatively) complete Hoare-style proof system for efficient plan generation and verification under the 0-approximation semantics, which uses the so-called off-line planning and on-line querying strategy from knowledge compilation: the agent generates and stores as many short proofs as possible in its spare time, and then answers queries quickly by composing the stored shorter proofs into a long proof using the compositional rule. We argue that both our semantics and our proof system can serve as logical foundations for reasoning about actions with loops.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
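The semantics sketched in this abstract can be checked mechanically for small ground programs: a candidate set of atoms is stable exactly when it equals the least model of the program's reduct with respect to that set (the Gelfond-Lifschitz construction). A guess-and-check sketch over a two-rule toy program; the rule encoding is ours, not from the paper:

```python
# Enumerate the stable models of a tiny ground normal program.
# A rule is (head, positive_body, negative_body).
from itertools import chain, combinations

rules = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
]
atoms = {"p", "q"}

def least_model(definite_rules):
    """Fixpoint of the immediate-consequence operator on definite rules."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(m):
    # Gelfond-Lifschitz reduct: drop rules blocked by m, then drop negation.
    reduct = [(h, pos) for h, pos, neg in rules if not (set(neg) & m)]
    return least_model(reduct) == m

subsets = chain.from_iterable(combinations(sorted(atoms), k)
                              for k in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])  # -> [{'p'}, {'q'}]
```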
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated into propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
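Stripped of the pruning techniques the paper contributes, the underlying extended Davis-Putnam procedure is a recursive split on the outermost quantified variable: an existential needs one satisfying branch, a universal needs both. A naive evaluator over a hypothetical prefix/CNF encoding of our own design:

```python
# Naive recursive QBF evaluator.  The prefix is a list of
# (quantifier, variable) pairs; the matrix is CNF over signed ints.
def evaluate(prefix, clauses, assign=None):
    assign = assign or {}
    if not prefix:   # matrix is now propositional: check every clause
        return all(any(assign.get(abs(l)) == (l > 0) for l in c)
                   for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (evaluate(rest, clauses, {**assign, v: val})
                for val in (False, True))
    return any(branches) if q == "exists" else all(branches)

# forall x exists y . (x or y) and (not x or not y)  -- true, with y = not x
prefix = [("forall", 1), ("exists", 2)]
clauses = [[1, 2], [-1, -2]]
print(evaluate(prefix, clauses))   # -> True
```

The paper's improvements attack exactly the expensive case here: the universal branch, which always explores both values.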
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
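The computation this abstract describes reduces to an eigenvalue problem on a centered kernel matrix. A minimal numpy sketch with made-up data and a degree-3 polynomial kernel; eigenvector normalization by the eigenvalues is omitted for brevity:

```python
# Kernel PCA sketch: form a kernel matrix, double-center it, and take
# its leading eigenvectors as nonlinear principal components.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # hypothetical input data

K = (X @ X.T + 1.0) ** 3                    # degree-3 polynomial kernel
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one  # centering in feature space

eigvals, eigvecs = np.linalg.eigh(Kc)       # ascending eigenvalue order
alpha = eigvecs[:, ::-1][:, :2]             # top-2 component directions
projections = Kc @ alpha                    # samples in component space
print(projections.shape)                    # -> (100, 2)
```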
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have received attention. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-sized knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Ensemble Classifier based approach for Code-Mixed Cross-Script Question Classification.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated into propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches that factorize either the associated information matrix or the measurement Jacobian into square root form have received attention. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and the systems for computing answer sets of such programs, can be successfully applied to the development of medium-sized knowledge-intensive applications. We report on the successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Semantic Approach for Schema Evolution and Versioning in Object-Oriented Databases In this paper a semantic approach for the specification and the management of databases with evolving schemata is introduced. It is shown how a general object-oriented model for schema versioning and evolution can be formalized; how the semantics of schema change operations can be defined; how interesting reasoning tasks can be supported, based on an encoding in description logics.
Computational Logic - CL 2000, First International Conference, London, UK, 24-28 July, 2000, Proceedings
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic programs with classical negation
The well-founded semantics for general logic programs A general logic program (abbreviated to “program” hereafter) is a set of rules that have both positive and negative subgoals. It is common to view a deductive database as a general logic program consisting of rules (IDB) sitting above elementary relations (EDB, facts). It is desirable to associate one Herbrand model with a program and think of that model as the “meaning” of the program, or its “declarative semantics.” Ideally, queries directed to the program would be answered in accordance with this model. Recent research indicates that some programs do not have a “satisfactory” total model; for such programs, the question of an appropriate partial model arises. Unfounded sets and well-founded partial models are introduced, and the well-founded semantics of a program is defined to be its well-founded partial model. If the well-founded partial model is in fact a total model, it is called the well-founded model. It is shown that the class of programs possessing a total well-founded model properly includes previously studied classes of “stratified” and “locally stratified” programs. The method in this paper is also compared with other proposals in the literature, including Clark's “program completion,” Fitting's and Kunen's 3-valued interpretations of it, and the “stable models” of Gelfond and Lifschitz.
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
Object Recognition from Local Scale-Invariant Features An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection.These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales.The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds.
Support-Vector Networks The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
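Rather than re-deriving the optimization, here is a hedged usage sketch: scikit-learn's SVC implements this style of kernelized soft-margin classifier, so it can stand in for the support-vector network on synthetic two-class data (the data set and hyperparameters below are made up):

```python
# Soft-margin SVM with a polynomial kernel on synthetic two-class data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)),   # class 0 cluster
               rng.normal(+1, 1, (50, 2))])  # class 1 cluster
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="poly", degree=3, C=1.0)    # polynomial input transform
clf.fit(X, y)
print(clf.score(X, y))                       # training accuracy
```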
Improving the I/O Performance of Real-Time Database Systems with Multiple-Disk Storage Structures
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
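The constant-time updates the abstract refers to come from working in information form, where a relative measurement adds a sparse outer-product term to the information matrix. A toy numpy sketch of that single update step; dimensions, noise values, and the observation are all illustrative, and none of the SEIF sparsification machinery is shown:

```python
# Information-form measurement update: z = H x + noise adds H^T R^-1 H
# to the information matrix and H^T R^-1 z to the information vector.
# Only the blocks of the pose and the observed feature change.
import numpy as np

dim = 6                              # e.g. 2-D robot position + 2 features
Lam = np.eye(dim)                    # information matrix (weak prior)
eta = np.zeros(dim)                  # information vector

H = np.zeros((2, dim))
H[:, 0:2] = -np.eye(2)               # robot position block
H[:, 2:4] = np.eye(2)                # observed feature block
R_inv = np.eye(2) / 0.1              # inverse measurement covariance
z = np.array([1.0, 0.5])             # relative observation of the feature

Lam += H.T @ R_inv @ H               # touches only a 4x4 block of Lam
eta += H.T @ R_inv @ z
mean = np.linalg.solve(Lam, eta)     # recover the state estimate on demand
print(mean[:4])
```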
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that an array's disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent even at catastrophic failure rates at which the array would lose up to a quarter of its storage capacity in a year.
1.2496
0.002713
0
0
0
0
0
0
0
0
0
0
0
0
Model ensemble tools for self-management in data centers We introduce Ensemble, a runtime framework and associated tools for building query latency models on-the-fly. These dynamic performance models can be used to support complex, highly dimensional resource allocation, and/or what-if performance inquiry in modern database environments, such as data centers and Clouds. Ensemble combines simple, partially specified, lower-dimensionality models to provide good initial approximations for higher dimensionality, end-to-end query latency models. We perform an experimental evaluation on industry-standard applications running on a multi-tier dynamic content server. We show that the Ensemble on-the-fly modeling framework provides accurate, fast and flexible performance modelling by using partial, lower dimensionality models to approximate end-to-end query latency models.
Dynamic partitioning of the cache hierarchy in shared data centers Due to the imperative need to reduce the management costs of large data centers, operators multiplex several concurrent database applications on a server farm connected to shared network attached storage. Determining and enforcing per-application resource quotas in the resulting cache hierarchy, on the fly, poses a complex resource allocation problem spanning the database server and the storage server tiers. This problem is further complicated by the need to provide strict Quality of Service (QoS) guarantees to hosted applications. In this paper, we design and implement a novel coordinated partitioning technique of the database buffer pool and storage cache between applications for any given cache replacement policy and per-application access pattern. We use statistical regression to dynamically determine the mapping between cache quota settings and the resulting per-application QoS. A resource controller embedded within the database engine actuates the partitioning of the two-level cache, converging towards the configuration with maximum application utility, expressed as the service provider revenue in that configuration, based on a set of latency sample points. Our experimental evaluation, using the MySQL database engine, a server farm with consolidated storage, and two e-commerce benchmarks, shows the effectiveness of our technique in enforcing application QoS, as well as maximizing the revenue of the service provider in shared server farms.
Context-aware prefetching at the storage server In many of today's applications, access to storage constitutes the major cost of processing a user request. Data prefetching has been used to alleviate the storage access latency. Under current prefetching techniques, the storage system prefetches a batch of blocks upon detecting an access pattern. However, the high level of concurrency in today's applications typically leads to interleaved block accesses, which makes detecting an access pattern a very challenging problem. Towards this, we propose and evaluate QuickMine, a novel, lightweight and minimally intrusive method for context-aware prefetching. Under QuickMine, we capture application contexts, such as a transaction or query, and leverage them for context-aware prediction and improved prefetching effectiveness in the storage cache. We implement a prototype of our context-aware prefetching algorithm in a storage-area network (SAN) built using Network Block Device (NBD). Our prototype shows that context-aware prefetching clearly outperforms existing context-oblivious prefetching algorithms, resulting in improvements of up to a factor of 2 in application latency for two e-commerce workloads with repeatable access patterns, TPC-W and RUBiS.
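A toy sketch of the context-aware idea: tag each block access with its application context so that interleaved streams separate cleanly, remember per-context successors, and issue a prefetch hint when a known pattern repeats. The trace, context tags, and block numbers below are hypothetical, and QuickMine's actual pattern mining is more elaborate than this one-step successor table:

```python
# Per-context successor prediction: interleaved accesses from different
# contexts (e.g. query templates Q1, Q2) no longer pollute each other.
from collections import defaultdict

successor = defaultdict(dict)   # context -> {block: observed next block}
last_seen = {}                  # context -> last accessed block

def access(ctx, block):
    prev = last_seen.get(ctx)
    if prev is not None:
        successor[ctx][prev] = block       # learn the per-context pattern
    last_seen[ctx] = block
    hint = successor[ctx].get(block)
    if hint is not None:
        print(f"[{ctx}] prefetch block {hint}")

for ctx, blk in [("Q1", 10), ("Q2", 70), ("Q1", 11), ("Q2", 71),
                 ("Q1", 10), ("Q1", 11)]:
    access(ctx, blk)            # the repeated Q1 run triggers prefetches
```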
Extended stable semantics for normal and disjunctive programs
The nature of statistical learning theory
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem.
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of its power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
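The stochastic search half of this combination is typified by WalkSAT-style local search: repeatedly pick an unsatisfied clause and flip either a random variable from it or the variable that breaks the fewest other clauses. A minimal sketch on a made-up three-variable instance rather than an actual planning encoding:

```python
# WalkSAT-style stochastic local search over CNF clauses (signed ints).
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10_000, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                 # model found
        clause = rng.choice(unsat)
        if rng.random() < p:              # random-walk move
            var = abs(rng.choice(clause))
        else:                             # greedy move: fewest breaks
            def breaks(v):
                assign[v] = not assign[v]
                broken = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]
                return broken
            var = min((abs(l) for l in clause), key=breaks)
        assign[var] = not assign[var]     # flip the chosen variable
    return None                           # give up after max_flips

print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```

A planning-as-satisfiability system would generate such clauses from the action and frame axioms of a bounded-horizon plan and read the plan off the satisfying assignment.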
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find that an array's disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent even at catastrophic failure rates at which the array would lose up to a quarter of its storage capacity in a year.
1.2
0.022222
0.008333
0
0
0
0
0
0
0
0
0
0
0
Hints for Computer System Design Experience with the design and implementation of a number of computer systems, and study of many other systems, has led to some general hints for system design which are described here. They are illustrated by a number of examples, ranging from hardware such as the Alto and the Dorado to applications programs such as Bravo and Star.
Latency management in storage systems Storage Latency Estimation Descriptors, or SLEDs, are an API that allow applications to understand and take advantage of the dynamic state of a storage system. By accessing data in the file system cache or high-speed storage first, total I/O workloads can be reduced and performance improved. SLEDs report estimated data latency, allowing users, system utilities, and scripts to make file access decisions based on those retrieval time estimates. SLEDs thus can be used to improve individual application performance, reduce system workloads, and improve the user experience with more predictable behavior. We have modified the Linux 2.2 kernel to support SLEDs, and several Unix utilities and astronomical applications have been modified to use them. As a result, execution times of the Unix utilities when data file sizes exceed the size of the file system buffer cache have been reduced by amounts ranging from 50% to more than an order of magnitude. The astronomical applications incurred 30-50% fewer page faults and reductions in execution time of 10-35%. Performance of applications which use SLEDs also degrades more gracefully as data file size grows.
The Design Of A Capability-Based Distributed Operating System
Robust, Portable I/O Scheduling With The Disk Mimic We propose a new approach for I/O scheduling that performs on-line simulation of the underlying disk. When simulation is integrated within a system, three key challenges must be addressed: first, the simulator must be portable across the full range of devices; second, all configuration must be automatic; third, the computation and memory overheads must be low. Our simulator, the Disk Mimic, achieves these goals by building a table-based model of the disk as it observes the times for previous requests. We show that a shortest-mimicked-time-first (SMTF) scheduler performs nearly as well as an approach with perfect knowledge of the underlying device and that it is superior to traditional scheduling algorithms such as C-LOOK and SSTF; our results hold as the seek and rotational characteristics of the disk are varied.
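A sketch of the table-based idea: bucket requests by inter-request distance, record observed service times per bucket, and have a shortest-mimicked-time-first scheduler pick the pending request with the smallest predicted time. All numbers, the bucket size, and the LBA-distance key below are illustrative simplifications of the paper's model:

```python
# Table-based disk model plus a shortest-mimicked-time-first scheduler.
from collections import defaultdict

table = defaultdict(list)        # distance bucket -> observed times (ms)

def observe(distance, service_ms):
    """Record a measured service time for a given inter-request distance."""
    table[distance // 1000].append(service_ms)

def mimic(distance):
    """Predicted service time: mean of observations in the bucket."""
    bucket = table.get(distance // 1000)
    return sum(bucket) / len(bucket) if bucket else float("inf")

def smtf(head_pos, pending):
    """Pick the pending LBA with the smallest mimicked service time."""
    return min(pending, key=lambda lba: mimic(abs(lba - head_pos)))

observe(500, 2.0)
observe(4200, 5.5)
observe(9100, 9.0)
print(smtf(head_pos=10_000, pending=[10_400, 14_300, 19_000]))  # -> 10400
```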
Detour: Informed Internet Routing and Transport Despite its obvious success, robustness, and scalability, the Internet suffers from a number of end-to-end performance and availability problems. In this paper, we attempt to quantify the Internet's inefficiencies and then we argue that Internet behavior can be improved by spreading intelligent routers at key access and interchange points to actively manage traffic. Our Detour prototype aims to demonstrate practical benefits to end users, without penalizing non-Detour users, by aggregating traffic information across connections and using more efficient routes to improve Internet performance.
Some Fault-Tolerant Aspects of the Chorus Distributed System
Exploiting the non-determinism and asynchrony of set iterators to reduce aggregate file I/O latency A key goal of distributed systems is to provide prompt access to shared information repositories. The high latency of remote access is a serious impediment to this goal. This paper describes a new file system abstraction called dynamic sets: unordered collections created by an application to hold the files it intends to process. Applications that iterate over a set to access its members allow the system to reduce the aggregate I/O latency by exploiting the non-determinism and asynchrony inherent in the semantics of set iterators. This reduction in latency comes without relying on reference locality, without modifying DFS servers and protocols, and without unduly complicating the programming model. This paper presents this abstraction and describes an implementation of it that runs on local and distributed file systems, as well as the World Wide Web. Dynamic sets demonstrate substantial performance gains: up to 50% savings in runtime for search on NFS, and up to 90% reduction in I/O latency for Web searches.
Regeneration of replicated objects: a technique and its Eden implementation A replicated directory system based on a method called regeneration is designed and implemented. The directory system allows selection of arbitrary objects to be replicated, choice of the number of replicas for each object, and placement of the copies on machines with independent failure modes. Copies can become inaccessible due to node crashes, but as long as a single copy survives, the replication level is restored by automatically replacing lost copies on other active machines. The focus is on a regeneration algorithm for replica replacement and its application to a replicated directory structure in the Eden local area network. A simple probabilistic approach is used to compare the availability provided by the algorithm to three other replication techniques.
Compiler-based I/O prefetching for out-of-core applications Current operating systems offer poor performance when a numeric application's working set does not fit in main memory. As a result, programmers who wish to solve “out-of-core” problems efficiently are typically faced with the onerous task of rewriting an application to use explicit I/O operations (e.g., read/write). In this paper, we propose and evaluate a fully automatic technique which liberates the programmer from this task, provides high performance, and requires only minimal changes to current operating systems. In our scheme the compiler provides the crucial information on future access patterns without burdening the programmer; the operating system supports nonbinding prefetch and release hints for managing I/O; and the operating system cooperates with a run-time layer to accelerate performance by adapting to dynamic behavior and minimizing prefetch overhead. This approach maintains the abstraction of unlimited virtual memory for the programmer, gives the compiler the flexibility to aggressively insert prefetches ahead of references, and gives the operating system the flexibility to arbitrate between the competing resource demands of multiple applications. We implemented our compiler analysis within the SUIF compiler, and used it to target implementations of our run-time and OS support on both research and commercial systems (Hurricane and IRIX 6.5, respectively). Our experimental results show large performance gains for out-of-core scientific applications on both systems: more than 50% of the I/O stall time has been eliminated in most cases, thus translating into overall speedups of roughly twofold in many cases.
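From user space, the prefetch/release hints in this design can be loosely approximated with POSIX access-pattern advice; os.posix_fadvise is a real call (on Linux and other Unix systems), though it is purely advisory and much weaker than the paper's compiler-inserted scheme. The file path and region sizes below are made up:

```python
# Advisory prefetch and release hints via posix_fadvise (Unix only).
import os

path = "/tmp/bigmatrix.dat"                  # hypothetical data file
with open(path, "wb") as f:
    f.write(b"\0" * 4096)                    # stand-in contents

fd = os.open(path, os.O_RDONLY)
try:
    # Hint: we will soon need this region -- the OS may start reading it in.
    os.posix_fadvise(fd, 0, 4096, os.POSIX_FADV_WILLNEED)
    # ... compute on earlier data while the OS prefetches ...
    # Hint: a region we are done with can be dropped from the page cache.
    os.posix_fadvise(fd, 0, 4096, os.POSIX_FADV_DONTNEED)
finally:
    os.close(fd)
```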
Object memory and storage management in the Clouds kernel Clouds is a distributed object-based operating system designed to support fault tolerance, location independence, and an action/object programming environment. Some of the key issues in supporting Clouds are the availability of object memory, object location, and object recovery. Object memory provides a set of global, persistent, named address spaces for storing objects. The address spaces resemble conventional segmentation schemes, but are persistent and thus replace both the computational and storage systems used in conventional schemes by a more powerful paradigm. The object location system provides transparent object operation invocation mechanisms throughout the distributed environment. The object recovery system supports recoverable objects through shadowing and two-phase commit techniques to allow atomicity of actions. The key issues in the design and implementation of the object memory and storage management system are briefly described
Efficient Failure Recovery in Multi-Disk Multimedia Servers In this paper, we present a novel disk failure recovery method that utilizes the inherent redundancy in video streams (rather than error-correcting codes) to ensure that the user-invoked on-the-fly failure recovery process does not impose any additional load on the disk array. We also present a disk array architecture that enhances the scalability of multimedia servers by: (1) integrating the recovery process with the decompression of video streams, and thereby distributing the reconstruction process across the clients; and (2) supporting graceful degradation in the quality of recovered images as the number of disk failures increases.
Inducing causal laws by regular inference Recent work on representing action and change has introduced high-level action languages which describe the effects of actions as causal laws in a declarative way. In this paper, we propose an algorithm to induce the effects of actions from an incomplete domain description and observations after executing action sequences, all of which are represented in the action language $\mathcal{A}$. Our induction algorithm generates effect propositions in $\mathcal{A}$ based on regular inference, i.e., an algorithm to learn finite automata. As opposed to previous work on learning automata from scratch, we are concerned with explanatory induction which accounts for observations from background knowledge together with induced hypotheses. Compared with previous approaches in ILP, an observation input to our induction algorithm is not restricted to a narrative but can be any fact observed after executing a sequence of actions. As a result, induction of causal laws can be formally characterized within action languages.
Value minimization in circumscription Minimization in circumscription has focussed on minimizing the extent of a set of predicates (with or without priorities among them), or of a formula. Although functions and other constants may be left varying during circumscription, no earlier formalism to the best of our knowledge minimized functions. In this paper we introduce and motivate the notion of value minimizing a function in circumscription. Intuitively, value minimizing a function consists in choosing those models where the value of the function is minimal relative to an ordering on its range. We first give the formulation of value minimization of a single function based on a syntactic transformation and then give a formulation in model-theoretic terms. We then discuss value minimization of a set of functions with and without priorities. We show how Lifschitz's Nested Abnormality Theories can be used to express value minimization, and discuss the prospect of its use for knowledge representation, particularly in formalizing reasoning about actions.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
1.017558
0.010816
0.010551
0.007303
0.006897
0.003765
0.002563
0.001231
0.00018
0.000016
0.000002
0
0
0
Semantic hashing We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and give a much better representation of each document than Latent Semantic Analysis. When the deepest layer is forced to use a small number of binary variables (e.g. 32), the graphical model performs ''semantic hashing'': Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method. By using semantic hashing to filter the documents given to TF-IDF, we achieve higher accuracy than applying TF-IDF to the entire document set.
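As a rough illustration of the retrieval step this abstract describes, the sketch below enumerates all addresses within a small Hamming radius of a query's binary code and collects the documents stored there. The function names (`hamming_ball`, `semantic_hash_lookup`) are hypothetical, and the sketch assumes the 32-bit codes have already been produced by the deep model and indexed in a dictionary.

```python
from itertools import combinations

def hamming_ball(address, n_bits, radius):
    """Yield every n_bits-wide address within `radius` bit flips of `address`."""
    yield address
    for r in range(1, radius + 1):
        for bits in combinations(range(n_bits), r):
            flipped = address
            for b in bits:
                flipped ^= (1 << b)  # flip one chosen bit
            yield flipped

def semantic_hash_lookup(query_code, index, n_bits=32, radius=2):
    """Return documents whose code differs from the query's in at most
    `radius` bits; `index` maps binary codes to lists of doc ids."""
    results = []
    for addr in hamming_ball(query_code, n_bits, radius):
        results.extend(index.get(addr, []))
    return results
```

With a 32-bit code and radius 2 this probes only 1 + 32 + 496 addresses, which is the source of the speed advantage over scanning the whole collection.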
A data-driven study of image feature extraction and fusion Feature analysis is the extraction and comparison of signals from multimedia data, which can subsequently be semantically analyzed. Feature analysis is the foundation of many multimedia computing tasks such as object recognition, image annotation, and multimedia information retrieval. In recent decades, considerable work has been devoted to the research of feature analysis. In this work, we use large-scale datasets to conduct a comparative study of four state-of-the-art, representative feature extraction algorithms: color-texture codebook (CT), SIFT codebook, HMAX, and convolutional networks (ConvNet). Our comparative evaluation demonstrates that different feature extraction algorithms enjoy their own advantages, and excel in different image categories. We provide key observations to explain where these algorithms excel and why. Based on these observations, we recommend feature extraction principles and identify several pitfalls for researchers and practitioners to avoid. Furthermore, we determine that in a large training dataset with more than 10,000 instances per image category, the four evaluated algorithms can converge to the same high level of category-prediction accuracy. This result supports the effectiveness of the data-driven approach. Finally, based on learned clues from each algorithm's confusion matrix, we devise a fusion algorithm to harvest synergies between these four algorithms and further improve class-prediction accuracy.
Multimodal Similarity-Preserving Hashing We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.
Transformation Invariant Autoassociation with Application to Handwritten Character Recognition When training neural networks by the classical backpropagation algorithm the whole problem to learn must be expressed by a set of inputs and desired outputs. However, we often have high-level knowledge about the learning problem. In optical character recognition (OCR), for instance, we know that the classification should be invariant under a set of transformations like rotation or translation. We propose a new modular classification system based on several autoassociative multilayer perceptrons which allows the efficient incorporation of such knowledge. Results are reported on the NIST database of upper case handwritten letters and compared to other approaches to the invariance problem.
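The classification rule behind such modular autoassociative systems — assign an input to the class whose autoassociator reconstructs it best — can be sketched as follows. The paper uses autoassociative multilayer perceptrons; as a minimal stand-in, the hypothetical `PCAAutoassociator` below uses a linear (PCA-based) reconstruction per class.

```python
import numpy as np

class PCAAutoassociator:
    """Linear stand-in for a per-class autoassociative MLP: reconstruct
    inputs from their projection onto the top-k principal components
    of one class's training data."""
    def __init__(self, k):
        self.k = k

    def fit(self, X):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = vt[:self.k]
        return self

    def reconstruction_error(self, x):
        z = (x - self.mean) @ self.components.T      # encode
        x_hat = self.mean + z @ self.components      # decode
        return np.linalg.norm(x - x_hat)

def classify(x, models):
    # `models` maps each class label to its fitted autoassociator;
    # pick the class whose model reconstructs x with least error.
    return min(models, key=lambda c: models[c].reconstruction_error(x))
```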
Self Supervised Boosting Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.
Nonlocal estimation of manifold structure. We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where local nonparametric methods fail.
Learning Despite Concept Variation by Finding Structure in Attribute-based Data Learning accuracy depends on concept variation. The accuracy of six learning systems (C4.5, Grove, Greedy, Fringe, LFC and MRP) is compared using a set of forty test concepts. The selection of these concepts was guided by the existence of structured concepts that appear in difficult real-world domains (such as protein folding). Such concepts often have embedded, implicit structure, which may be revealed through explicit relations. Experiments using these benchmark concepts show that...
A Scalable Hierarchical Distributed Language Model Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models.
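A minimal sketch of the hierarchical idea: the probability of a word is the product of binary left/right decisions along its root-to-leaf path in the word tree, so evaluating one word costs O(log V) instead of O(V). The function below is illustrative only; `path` and `node_vecs` are assumed inputs, not the paper's API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def word_probability(context_vec, path, node_vecs):
    """P(word | context) as a product of binary decisions along the
    word's root-to-leaf path.  `path` is a list of (node_id, side)
    pairs with side = +1 for left and -1 for right; `node_vecs`
    maps each inner tree node to its parameter vector."""
    p = 1.0
    for node_id, side in path:
        p *= sigmoid(side * np.dot(node_vecs[node_id], context_vec))
    return p
```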
Deep Machine Learning - A New Frontier in Artificial Intelligence Research [Research Frontier] This article provides an overview of the mainstream deep learning approaches and research directions proposed over the past decade. It is important to emphasize that each approach has strengths and weaknesses, depending on the application and context in which it is being used. Thus, this article presents a summary on the current state of the deep machine learning field and some perspective into how it may evolve. Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) (and their respective variations) are focused on primarily because they are well established in the deep learning field and show great promise for future work.
Parallel networks that learn to pronounce English text This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed human performance. (i) The learning follows a power law. (ii) The more words the network learns, the better it is at generalizing and correctly pronouncing new words. (iii) The performance of the network degrades very slowly as connections in the network are damaged: no single link or processing unit is essential. (iv) Relearning after damage is much faster than learning during the original training. (v) Distributed or spaced practice is more effective for long-term retention than massed practice. Network models can be constructed that have the same performance and learning characteristics on a particular task, but differ completely at the levels of synaptic strengths and single-unit responses. However, hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units. This suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
TextTiling: segmenting text into multi-paragraph subtopic passages TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.
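A rough sketch of the block-comparison step in the spirit of TextTiling (simplified; the published algorithm also smooths the similarity curve and converts it to depth scores before placing boundaries). The name `block_similarities` and the pseudo-sentence framing are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def block_similarities(tokens, w=20, k=10):
    """Cosine similarity of the term counts on either side of each
    candidate gap between w-token pseudo-sentences, comparing blocks
    of up to k pseudo-sentences."""
    blocks = [Counter(tokens[i:i + w]) for i in range(0, len(tokens), w)]
    sims = []
    for gap in range(1, len(blocks)):
        left = sum(blocks[max(0, gap - k):gap], Counter())
        right = sum(blocks[gap:gap + k], Counter())
        vocab = sorted(set(left) | set(right))
        a = np.array([left[t] for t in vocab], float)
        b = np.array([right[t] for t in vocab], float)
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(a @ b / denom if denom else 0.0)
    return sims  # subtopic boundaries sit at deep local minima
```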
Disk scheduling in a multimedia I/O system This article provides a retrospective of our original paper by the same title in the Proceedings of the First ACM Conference on Multimedia, published in 1993. This article examines the problem of disk scheduling in a multimedia I/O system. In a multimedia server, the disk requests may have constant data rate requirements and need guaranteed service. We propose a new scheduling algorithm, SCAN-EDF, that combines the features of SCAN type of seek optimizing algorithm with an Earliest Deadline First (EDF) type of real-time scheduling algorithm. We compare SCAN-EDF with other scheduling strategies and show that SCAN-EDF combines the best features of both SCAN and EDF. We also investigate the impact of buffer space on the maximum number of video streams that can be supported.We show that by making the deadlines larger than the request periods, a larger number of streams can be supported.We also describe how we extended the SCAN-EDF algorithm in the PRISM multimedia architecture. PRISM is an integrated multimedia server, designed to satisfy the QOS requirements of multiple classes of requests. Our experience in implementing the extended SCAN-EDF algorithm in a generic operating system is discussed and performance metrics and results are presented to illustrate how the SCAN-EDF extensions and implementation strategies have succeeded in meeting the QOS requirements of different classes of requests.
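The core tie-breaking idea of SCAN-EDF — serve requests in deadline order, and among requests sharing the earliest deadline follow the SCAN elevator order — can be sketched as below. This is a simplification (the paper also discusses deadline manipulation and buffering effects); the function name and the (deadline, track) tuple format are assumptions.

```python
def scan_edf_order(requests, head_position):
    """Order (deadline, track) disk requests by deadline first (EDF);
    among equal deadlines, sweep up from the head position and pick up
    tracks behind the head on the return sweep (SCAN)."""
    def key(req):
        deadline, track = req
        behind = track < head_position          # deferred to return sweep
        return (deadline, behind, track if not behind else -track)
    return sorted(requests, key=key)

# Example: requests as (deadline, track) pairs, head currently at track 50.
reqs = [(30, 75), (30, 12), (10, 40), (20, 90)]
print(scan_edf_order(reqs, head_position=50))
# -> [(10, 40), (20, 90), (30, 75), (30, 12)]
```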
Planning under Incomplete Knowledge We propose a new logic-based planning language, called K. Transitions between states of knowledge can be described in K, and the language is well suited for planning under incomplete knowledge. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, proving to be very flexible. A planning system supporting K is implemented on top of the disjunctive logic programming system DLV. This novel system allows for solving hard planning problems, including secure planning under incomplete initial states, which cannot be solved at all by other logic-based planning systems such as traditional satisfiability planners.
Editorial introduction to the Neural Networks special issue on Deep Learning of Representations.
1.010644
0.010995
0.01091
0.009144
0.009109
0.004573
0.003061
0.001571
0.000453
0.000019
0
0
0
0
Characterizing and extending answer set semantics using possibility theory. Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
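A minimal numpy sketch of the procedure (assuming an RBF kernel; the function name and the centering recipe follow the standard kernel-PCA formulation rather than this paper's exact notation):

```python
import numpy as np

def kernel_pca(X, k, gamma=1.0):
    """Kernel PCA with an RBF kernel: build the kernel matrix, center
    it in feature space, and return the top-k nonlinear principal
    component scores of the training points."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # double centering
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    return vecs * np.sqrt(np.maximum(vals, 0))    # component scores
```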
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
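The computational core — solving the linearized SLAM least-squares problem by factorizing the measurement Jacobian into square-root form — can be sketched in a few lines of numpy. This is a toy dense illustration; real implementations exploit the sparsity and the variable-ordering heuristics the abstract mentions.

```python
import numpy as np

def square_root_smooth(A, b):
    """Solve the linearized problem  min ||A x - b||^2  by
    QR-factorizing the measurement Jacobian A, as in square root
    information smoothing."""
    Q, R = np.linalg.qr(A)      # A = Q R, with R upper triangular
    d = Q.T @ b                 # rotate the residual into R's frame
    # Back-substitution on R x = d; R plays the role of the
    # square-root information matrix.
    return np.linalg.solve(R, d)
```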
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
When plans distinguish Bayes nets We consider the complexity of determining whether differing probability distributions for the same Bayes net result in different policies, significantly different policy outcomes or optimal value functions.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
On the Use of a Mixed Multiscale Finite Element Method for Greater Flexibility and Increased Speed or Improved Accuracy in Reservoir Simulation In this paper we propose a modified mixed multiscale finite element method for solving elliptic problems with rough coefficients arising in, e.g., porous media flow. The method is based on the construction of special base functions which adapt to the local property of the differential operator. In particular, the method incorporates the effect of small-scale heterogeneous structures in the elliptic coefficients into the base functions and produces a detailed velocity field that can be used to solve phase transport equations at a subgrid scale. The method is mass conservative and accounts for radial flow in the near-well region without resorting to complicated well models or near-well upscaling procedures. As such, the method provides a step toward a more accurate and rigorous treatment of advanced well architectures in reservoir simulation. The accuracy of the method is demonstrated through a series of three-dimensional incompressible two-phase flow simulations.
Analysis of two-scale finite volume element method for elliptic problem In this paper we propose and analyze a class of finite volume element method for solving a second order elliptic boundary value problem whose solution is defined in more than one length scales. The method has the ability to incorporate the small scale behaviors of the solution on the large scale one. This is achieved through the construction of the basis functions on each element that satisfy the homogeneous elliptic differential equation. Furthermore, the method enjoys numerical conservation feature which is highly desirable in many applications. Existing analyses on its finite element counterpart reveal that there exists a resonance error between the mesh size and the small length scale. This result motivates an oversampling technique to overcome this drawback. We develop an analysis of the proposed method under the assumption that the coefficients are of two scales and periodic in the small scale. The theoretical results are confirmed experimentally by several convergence tests. Moreover, we present an application of the method to flows in porous media.
Treating Highly Anisotropic Subsurface Flow with the Multiscale Finite-Volume Method The multiscale finite-volume (MSFV) method has been designed to solve flow problems on large domains efficiently. First, a set of basis functions, which are local numerical solutions, is employed to construct a fine-scale pressure approximation; then a conservative fine-scale velocity approximation is constructed by solving local problems with boundary conditions obtained from the pressure approximation; finally, transport is solved at the fine scale. The method proved very robust and accurate for multiphase flow simulations in highly heterogeneous isotropic reservoirs with complex correlation structures. However, it has recently been pointed out that the fine-scale details of the MSFV solutions may be lost in the case of high anisotropy or large grid aspect ratios. This shortcoming is analyzed in this paper, and it is demonstrated that it is caused by the appearance of unphysical "circulation cells." We show that damped-shear boundary conditions for the conservative-velocity problems or linear boundary conditions for the basis-function problems can significantly improve the MSFV solution for highly anisotropic permeability fields without sensitively affecting the solution in the isotropic case.
A mixed multiscale finite element method for elliptic problems with oscillating coefficients The recently introduced multiscale finite element method for solving elliptic equations with oscillating coefficients is designed to capture the large-scale structure of the solutions without resolving all the fine-scale structures. Motivated by the numerical simulation of flow transport in highly heterogeneous porous media, we propose a mixed multiscale finite element method with an over-sampling technique for solving second order elliptic equations with rapidly oscillating coefficients. The multiscale finite element bases are constructed by locally solving Neumann boundary value problems. We provide a detailed convergence analysis of the method under the assumption that the oscillating coefficients are locally periodic. While such a simplifying assumption is not required by our method, it allows us to use homogenization theory to obtain the asymptotic structure of the solutions. Numerical experiments are carried out for flow transport in a porous medium with a random log-normal relative permeability to demonstrate the efficiency and accuracy of the proposed method.
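For orientation, the local basis construction that multiscale finite element methods rest on can be written as follows (standard MsFEM form with assumed notation; the mixed variant of this paper instead solves local Neumann problems for velocity basis functions, and the oversampling variant solves it on an enlarged domain):

```latex
% On each coarse element K, the multiscale basis \psi_i solves the
% homogeneous equation with the oscillatory coefficient a(x/\varepsilon):
\[
  -\nabla\cdot\bigl(a(x/\varepsilon)\,\nabla\psi_i\bigr) = 0
  \quad\text{in } K,
  \qquad
  \psi_i = \varphi_i \quad\text{on } \partial K,
\]
% where \varphi_i is the standard nodal (linear) basis function, so
% that \psi_i carries the fine-scale oscillations into the coarse space.
```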
Convergence of a Nonconforming Multiscale Finite Element Method The multiscale finite element method (MsFEM) [T. Y. Hou, X. H. Wu, and Z. Cai, Math. Comp., 1998, to appear; T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189] has been introduced to capture the large scale solutions of elliptic equations with highly oscillatory coefficients. This is accomplished by constructing the multiscale base functions from the local solutions of the elliptic operator. Our previous study reveals that the leading order error in this approach is caused by the ``resonant sampling,'' which leads to large error when the mesh size is close to the small scale of the continuous problem. Similar difficulty also arises in numerical upscaling methods. An oversampling technique has been introduced to alleviate this difficulty [T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189]. A consequence of the oversampling method is that the resulting finite element method is no longer conforming. Here we give a detailed analysis of the nonconforming error. Our analysis also reveals a new cell resonance error which is caused by the mismatch between the mesh size and the wavelength of the small scale. We show that the cell resonance error is of lower order. Our numerical experiments demonstrate that the cell resonance error is generically small and is difficult to observe in practice.
Adaptive multiscale finite-volume method for nonlinear multiphase transport in heterogeneous formations In the previous multiscale finite-volume (MSFV) method, an efficient and accurate multiscale approach was proposed to solve the elliptic flow equation. The reconstructed fine-scale velocity field was then used to solve the nonlinear hyperbolic transport equation for the fine-scale saturations using an overlapping Schwarz scheme. A coarse-scale system for the transport equations was not derived because of the hyperbolic character of the governing equations and intricate nonlinear interactions between the saturation field and the underlying heterogeneous permeability distribution. In this paper, we describe a sequential implicit multiscale finite-volume framework for coupled flow and transport with general prolongation and restriction operations for both pressure and saturation, in which three adaptive prolongation operators for the saturation are used. In regions with rapid pressure and saturation changes, the original approach, with full reconstruction of the velocity field and overlapping Schwarz, is used to compute the saturations. In regions where the temporal changes in velocity or saturation can be represented by asymptotic linear approximations, two additional approximate prolongation operators are proposed. The efficiency and accuracy are evaluated for two-phase incompressible flow in two- and three-dimensional domains. The new adaptive algorithm is tested using various models with homogeneous and heterogeneous permeabilities. It is demonstrated that the multiscale results with the adaptive transport calculation are in excellent agreement with the fine-scale solutions. Furthermore, the adaptive multiscale scheme of flow and transport is much more computationally efficient compared with the previous MSFV method and conventional fine-scale reservoir simulation methods.
Empirical Analysis of Predictive Algorithms for Collaborative Filtering Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation...
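A minimal sketch of the correlation-based ("memory-based") prediction such comparisons cover: predict the active user's rating of an item as their mean rating plus a Pearson-correlation-weighted average of the other users' mean-centred ratings. The function name and data layout are assumptions.

```python
import numpy as np

def predict_rating(ratings, active, item):
    """Memory-based CF prediction.  `ratings` is a dense user x item
    array with np.nan marking missing entries; `active` is the row
    index of the user whose rating of `item` we predict."""
    means = np.nanmean(ratings, axis=1)
    weights, contribs = [], []
    for u in range(ratings.shape[0]):
        if u == active or np.isnan(ratings[u, item]):
            continue
        both = ~np.isnan(ratings[active]) & ~np.isnan(ratings[u])
        if both.sum() < 2:
            continue  # need at least two co-rated items
        w = np.corrcoef(ratings[active, both], ratings[u, both])[0, 1]
        if np.isnan(w):
            continue  # constant rating vector, correlation undefined
        weights.append(abs(w))
        contribs.append(w * (ratings[u, item] - means[u]))
    if not weights:
        return means[active]
    return means[active] + sum(contribs) / sum(weights)
```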
Temporal data base management Reasoning about time typically involves drawing conclusions on the basis of incomplete information. Uncertainty arises in the form of ignorance, indeterminacy, and indecision. Despite the lack of complete information, a problem solver is continually forced to make predictions in order to pursue hypotheses and plan for the future. Such predictions are frequently contravened by subsequent evidence. This paper presents a computational approach to temporal reasoning that directly confronts these issues. The approach centers around techniques for managing a data base of assertions corresponding to the occurrence of events and the persistence of their effects over time. The resulting computational framework performs the temporal analog of (static) reason maintenance by keeping track of dependency information involving assumptions about the truth of facts spanning various intervals of time. The system described in this paper extends classical predicate-calculus data bases, such as those used by PROLOG, to deal with time in an efficient and natural manner.
Rational Kernels: Theory and Algorithms Many classification algorithms were originally designed for fixed-size vectors. Recent applications in text and speech processing and computational biology require however the analysis of variable-length sequences and more generally weighted automata. An approach widely used in statistical learning techniques such as Support Vector Machines (SVMs) is that of kernel methods, due to their computational efficiency in high-dimensional feature spaces. We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels , that extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. Not all rational kernels are positive definite and symmetric (PDS), or equivalently verify the Mercer condition, a condition that guarantees the convergence of training for discriminant classification algorithms such as SVMs. We present several theoretical results related to PDS rational kernels. We show that under some general conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We give the proof of several characterization results that can be used to guide the design of PDS rational kernels. We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before. Rational kernels can be combined with SVMs to form efficient and powerful techniques for a variety of classification tasks in text and speech processing, or computational biology. We describe examples of general families of PDS rational kernels that are useful in many of these applications and report the result of our experiments illustrating the use of rational kernels in several difficult large-vocabulary spoken-dialog classification tasks based on deployed spoken-dialog systems. Our results show that rational kernels are easy to design and implement and lead to substantial improvements of the classification accuracy.
The Eden System: A Technical Review The Eden project is a five year experiment in designing, building, and using an "integrated distributed" computing system. We are attempting to combine the benefits of integration and distribution by supporting an object based style of programming on top of a node machine/local network hardware base. Our experimental hypothesis is that such an architecture will provide an environment conducive to building distributed applications.
Actions and specificity A solution to the problem of specificity in a resource-oriented deductive approach to actions and change is presented. Specificity originates in the problem of overloading methods in object oriented frameworks but can be observed in general applications of actions and change in logic. We give a uniform solution to the problem of specificity culminating in a completed equational logic program with an equational theory. We show the soundness and completeness of SLDENF-resolution, i.e. SLD-resolution augmented by negation-as-failure and by an equational theory, with respect to the completed program. Finally, the expressiveness of our approach for performing general reasoning about actions, change, and causality is demonstrated.
Data cache management using frequency-based replacement We propose a new frequency-based replacement algorithm for managing caches used for disk blocks by a file system, database management system, or disk control unit, which we refer to here as data caches. Previously, LRU replacement has usually been used for such caches. We describe a replacement algorithm based on the concept of maintaining reference counts in which locality has been “factored out”. In this algorithm replacement choices are made using a combination of reference frequency and block age. Simulation results based on traces of file system and I/O activity from actual systems show that this algorithm can offer up to 34% performance improvement over LRU replacement, where the improvement is expressed as the fraction of the performance gain achieved between LRU replacement and the theoretically optimal policy in which the reference string must be known in advance. Furthermore, the implementation complexity and efficiency of this algorithm is comparable to one using LRU replacement.
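A toy sketch of the frequency-based idea, heavily simplified from the paper's three-section scheme and using assumed names: re-references inside a small most-recently-used "new section" do not bump a block's count, so bursty locality is factored out of the frequency statistic, and eviction removes the old block with the lowest count (ties broken by age).

```python
from collections import OrderedDict

class FrequencyBasedCache:
    """Simplified frequency-based replacement for disk-block caches."""
    def __init__(self, capacity, new_section=2):
        self.capacity, self.new = capacity, new_section
        self.blocks = OrderedDict()   # block -> ref count, MRU last

    def access(self, block):
        if block in self.blocks:
            order = list(self.blocks)
            # Count a re-reference only if the block has aged out of
            # the new (most-recently-used) section.
            if order.index(block) < len(order) - self.new:
                self.blocks[block] += 1
            self.blocks.move_to_end(block)
            return "hit"
        if len(self.blocks) >= self.capacity:
            order = list(self.blocks)
            old = order[:-self.new] or order   # prefer non-new blocks
            victim = min(old, key=lambda b: (self.blocks[b], order.index(b)))
            del self.blocks[victim]
        self.blocks[block] = 1
        return "miss"
```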
Improving the tolerance of multilayer perceptrons by minimizing the statistical sensitivity to weight deviations This paper proposes a version of the backpropagation algorithm which increases the tolerance of a feedforward neural network against deviations in the weight values. These changes can originate either when the neural network is mapped on a given VLSI circuit where the precision and/or weight matching are low, or by physical defects affecting the neural circuits. The modified backpropagation algorithm we propose uses the statistical sensitivity of the network to changes in the weights as a quantitative measure of network tolerance and attempts to reduce this statistical sensitivity while keeping the figures for the usual training performance (in errors and time) similar to those obtained with the usual backpropagation algorithm.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms with FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.020446
0.022493
0.018021
0.014655
0.01304
0.003852
0
0
0
0
0
0
0
0
Tractable Structures for Constraint Satisfaction with Truth Tables The way the graph structure of the constraints influences the complexity of constraint satisfaction problems (CSP) is well understood for bounded-arity constraints. The situation is less clear if there is no bound on the arities. In this case the answer depends also on how the constraints are represented in the input. We study this question for the truth table representation of constraints. We introduce a new hypergraph measure adaptive width and show that CSP with truth tables is polynomial-time solvable if restricted to a class of hypergraphs with bounded adaptive width. Conversely, assuming a conjecture on the complexity of binary CSP, there is no other polynomial-time solvable case. Finally, we present a class of hypergraphs with bounded adaptive width and unbounded fractional hypertree width.
Tractable Hypergraph Properties for Constraint Satisfaction and Conjunctive Queries An important question in the study of constraint satisfaction problems (CSP) is understanding how the graph or hypergraph describing the incidence structure of the constraints influences the complexity of the problem. For binary CSP instances (that is, where each constraint involves only two variables), the situation is well understood: the complexity of the problem essentially depends on the treewidth of the graph of the constraints [Grohe 2007; Marx 2010b]. However, this is not the correct answer if constraints with unbounded number of variables are allowed, and in particular, for CSP instances arising from query evaluation problems in database theory. Formally, if H is a class of hypergraphs, then let CSP(H) be CSP restricted to instances whose hypergraph is in H. Our goal is to characterize those classes of hypergraphs for which CSP(H) is polynomial-time solvable or fixed-parameter tractable, parameterized by the number of variables. Note that in the applications related to database query evaluation, we usually assume that the number of variables is much smaller than the size of the instance, thus parameterization by the number of variables is a meaningful question. The most general known property of H that makes CSP(H) polynomial-time solvable is bounded fractional hypertree width. Here we introduce a new hypergraph measure called submodular width, and show that bounded submodular width of H (which is a strictly more general property than bounded fractional hypertree width) implies that CSP(H) is fixed-parameter tractable. In a matching hardness result, we show that if H has unbounded submodular width, then CSP(H) is not fixed-parameter tractable (and hence not polynomial-time solvable), unless the Exponential Time Hypothesis (ETH) fails. The algorithmic result uses tree decompositions in a novel way: instead of using a single decomposition depending on the hypergraph, the instance is split into a set of instances (all on the same set of variables as the original instance), and then the new instances are solved by choosing a different tree decomposition for each of them. The reason why this strategy works is that the splitting can be done in such a way that the new instances are "uniform" with respect to the number of extensions of partial solutions, and therefore the number of partial solutions can be described by a submodular function. For the hardness result, we prove via a series of combinatorial results that if a hypergraph H has large submodular width, then a 3SAT instance can be efficiently simulated by a CSP instance whose hypergraph is H. To prove these combinatorial results, we need to develop a theory of (multicommodity) flows on hypergraphs and vertex separators in the case when the function b(S) defining the cost of separator S is submodular, which can be of independent interest.
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
Fixed-Parameter Algorithms For Artificial Intelligence, Constraint Satisfaction and Database Problems We survey the parameterized complexity of problems that arise in artificial intelligence, database theory and automated reasoning. In particular, we consider various parameterizations of the constraint satisfaction problem, the evaluation problem of Boolean conjunctive database queries and the propositional satisfiability problem. Furthermore, we survey parameterized algorithms for problems arising in the context of the stable model semantics of logic programs, for a number of other problems of non-monotonic reasoning, and for the computation of cores in data exchange.
A perspective on assumption-based truth maintenance
Tree clustering for constraint networks The paper offers a systematic way of regrouping constraints into hierarchical structures capable of supporting search without backtracking. The method involves the formation and preprocessing of an acyclic database that permits a large variety of queries and local perturbations to be processed swiftly, either by sequential backtrack-free procedures, or by distributed constraint propagation processes.
Stable Model Semantics of Weight Constraint Rules A generalization of logic program rules is proposed where rules are built from weight constraints with type information for each predicate instead of simple literals. These kinds of constraints are useful for concisely representing different kinds of choices as well as cardinality, cost and resource constraints in combinatorial problems such as product configuration. A declarative semantics for the rules is presented which generalizes the stable model semantics of normal logic programs. It is shown that for ground rules the complexity of the relevant decision problems stays in NP. The first implementation of the language handles a decidable subset where function symbols are not allowed. It is based on a new procedure for computing stable models for ground rules extending normal programs with choice and weight constructs and a compilation technique where a weight rule with variables is transformed to a set of such simpler ground rules.
Symmetry Reduction for SAT Representations of Transition Systems Symmetries are inherent in systems that consist of several interchangeable objects or components. When reasoning about such systems, big computational savings can be obtained if the presence of symmetries is recognized. In earlier work, symmetries in constraint satisfaction problems have been handled by introducing symmetry-breaking constraints. In reasoning about transition systems, notably in model-checking and reachability analysis in computer-aided verification, symmetries have been handled by symmetry reduction algorithms that eliminate redundant search caused by symmetries. In this work, we investigate symmetry handling in a problem in the intersection of these two areas: handling symmetries in representations of transition systems in the propositional logic. The problem shows up in representations of AI planning as a satisfiability problem, and in recent approaches to model-checking that represent transition systems as propositional formulae. Symmetry-breaking constraints can be added to the propositional logic representation of transition sequences for removing all the symmetry at one point of time, but removing symmetry from the whole transition sequence is much more difficult, and has not been addressed in earlier work. We present a solution to the problem.
Planning for contingencies: a decision-based approach A fundamental assumption made by classical AI planners is that there is no uncertainty in the world: the planner has full knowledge of the conditions under which the plan will be executed and the outcome of every action is fully predictable. These planners cannot therefore construct contingency plans, i.e., plans in which different actions are performed in different circumstances. In this paper we discuss some issues that arise in the representation and construction of contingency plans and describe Cassandra, a partial-order contingency planner. Cassandra uses explicit decision-steps that enable the agent executing the plan to decide which plan branch to follow. The decision-steps in a plan result in subgoals to acquire knowledge, which are planned for in the same way as any other subgoals. Cassandra thus distinguishes the process of gathering information from the process of making decisions. The explicit representation of decisions in Cassandra allows a coherent approach to the problems of contingent planning, and provides a solid base for extensions such as the use of different decision-making procedures.
Information and control in gray-box systems In modern systems, developers are often unable to modify the underlying operating system. To build services in such an environment, we advocate the use of gray-box techniques. When treating the operating system as a gray-box, one recognizes that not changing the OS restricts, but does not completely obviate, both the information one can acquire about the internal state of the OS and the control one can impose on the OS. In this paper, we develop and investigate three gray-box Information and Control Layers (ICLs) for determining the contents of the file-cache, controlling the layout of files across local disk, and limiting process execution based on available memory. A gray-box ICL sits between a client and the OS and uses a combination of algorithmic knowledge, observations, and inferences to garner information about or control the behavior of a gray-box system. We summarize a set of techniques that are helpful in building gray-box ICLs and have begun to organize a "gray toolbox" to ease the construction of ICLs. Through our case studies, we demonstrate the utility of gray-box techniques, by implementing three useful "OS-like" services without the modification of a single line of OS source code.
Distributed, object-based programming systems The development of distributed operating systems and object-based programming languages makes possible an environment in which programs consisting of a set of interacting modules, or objects, may execute concurrently on a collection of loosely coupled processors. An object-based programming language encourages a methodology for designing and creating a program as a set of autonomous components, whereas a distributed operating system permits a collection of workstations or personal computers to be treated as a single entity. The amalgamation of these two concepts has resulted in systems that shall be referred to as distributed, object-based programming systems. This paper discusses issues in the design and implementation of such systems. Following the presentation of fundamental concepts and various object models, issues in object management, object interaction management, and physical resource management are discussed. Extensive examples are drawn from existing systems.
Near-Optimal Parallel Prefetching and Caching Recently there has been a great deal of interest in the operating systems research community in prefetching and caching data from parallel disks, as a technique for enabling serial applications to improve input--output (I/O) performance. In this paper, algorithms are considered for integrated prefetching and caching in a model with a fixed-size cache and any number of backing storage devices (disks). The integration of caching and prefetching with a single disk was previously considered by Cao, Felten, Karlin, and Li. Here, it is shown that the natural extension of their aggressive algorithm to the parallel disk case is suboptimal by a factor near the number of disks in the worst case. The main result is a new algorithm, reverse aggressive, with near-optimal performance for integrated prefetching and caching in the presence of multiple disks.
Towards application/file-level characterization of block references: a case for fine-grained buffer management Two contributions are made in this paper. First, we show that system level characterization of file block references is inadequate for maximizing buffer cache performance. We show that a finer-grained characterization approach is needed. Though application level characterization methods have been proposed, this is the first attempt, to the best of our knowledge, to consider file level characterizations. We propose an Application/File-level Characterization (AFC) scheme where we detect on-line the reference characteristics at the application level and then at the file level, if necessary. The results of this characterization are used to employ appropriate replacement policies in the buffer cache to maximize performance. The second contribution is in proposing an efficient and fair buffer allocation scheme. Application or file level resource management is infeasible unless there exists an allocation scheme that is efficient and fair. We propose the ΔHIT allocation scheme that takes away a block from the application/file where the removal results in the smallest reduction in the number of expected buffer cache hits. Both the AFC and ΔHIT schemes are on-line schemes that detect and allocate as applications execute. Experiments using trace-driven simulations show that substantial performance improvements can be made. For single application executions the hit ratio increased an average of 13 percentage points compared to the LRU policy, with a maximum increase of 59 percentage points, while for multiple application executions, the increase is an average of 12 percentage points, with a maximum of 32 percentage points for the workloads considered.
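The ΔHIT rule above is a marginal-utility argument, and a toy sketch makes it concrete. This is my own illustration, not the paper's implementation: "expected_hits" stands in for the per-stream hit curves that the on-line characterization step would estimate.

def pick_victim(expected_hits, alloc):
    # expected_hits: stream -> callable mapping a buffer count to expected hits
    # alloc:         stream -> number of buffers currently held
    def marginal_loss(s):
        n = alloc[s]
        return expected_hits[s](n) - expected_hits[s](n - 1)
    # take a buffer from the stream that loses the fewest expected hits
    return min((s for s, n in alloc.items() if n > 0), key=marginal_loss)

curves = {'a': lambda n: 10 * n, 'b': lambda n: 2 * n}   # toy per-stream hit curves
print(pick_victim(curves, {'a': 4, 'b': 4}))             # -> 'b' (the flatter curve)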
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.22105
0.110525
0.032055
0.02211
0.000985
0.000574
0.000164
0.000001
0
0
0
0
0
0
Deep extreme learning machines: supervised autoencoding architecture for classification. We present a method for synthesising deep neural networks using Extreme Learning Machines (ELMs) as a stack of supervised autoencoders. We test the method using standard benchmark datasets for multi-class image classification (MNIST, CIFAR-10 and Google Streetview House Numbers (SVHN)), and show that the classification error rate can progressively improve with the inclusion of additional autoencoding ELM modules in a stack. Moreover, we found that the method can correctly classify up to 99.19% of MNIST test images, which surpasses the best error rates reported for standard 3-layer ELMs or previous deep ELM approaches when applied to MNIST. The approach simultaneously offers a significantly faster training algorithm to achieve its best performance (in the order of 5min on a four-core CPU for MNIST) relative to a single ELM with the same total number of hidden units as the deep ELM, hence offering the best of both worlds: lower error rates and fast implementation.
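For readers unfamiliar with the construction, a minimal numpy sketch of one ELM autoencoding module follows (here reconstructing its input; the paper stacks supervised variants). The ridge regularizer, tanh activation, and layer sizes are illustrative assumptions, not values from the paper.

import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, rng=np.random.default_rng(0)):
    # random, untrained input weights (the defining trait of an ELM)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # hidden representation
    # analytic ridge solve for output weights: min ||H beta - X||^2 + reg ||beta||^2
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return H, beta                               # H feeds the next module in the stack

X = np.random.default_rng(1).standard_normal((100, 20))
H1, _ = elm_autoencoder(X, n_hidden=64)
H2, _ = elm_autoencoder(H1, n_hidden=64)         # a two-module stack

The training cost is one linear solve per module, which is where the speed claim in the abstract comes from.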
Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays. In this letter, the global asymptotical stability analysis problem is considered for a class of Markovian jumping stochastic Cohen-Grossberg neural networks (CGNNs) with mixed delays including discrete delays and distributed delays. An alternative delay-dependent stability analysis result is established based on the linear matrix inequality (LMI) technique, which can easily be checked by utilizing the numerically efficient Matlab LMI toolbox. Neither a system transformation nor a free-weighting matrix via the Newton-Leibniz formula is required. Two numerical examples are included to show the effectiveness of the result.
Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. In this paper, a weighting-delay-based method is developed for the study of the stability problem of a class of recurrent neural networks (RNNs) with time-varying delay. Different from previous results, the delay interval [0, d(t)] is divided into some variable subintervals by employing weighting delays. Thus, new delay-dependent stability criteria for RNNs with time-varying delay are derived by applying this weighting-delay method, which are less conservative than previous results. The proposed stability criteria depend on the positions of weighting delays in the interval [0, d(t)] , which can be denoted by the weighting-delay parameters. Different weighting-delay parameters lead to different stability margins for a given system. Thus, a solution based on optimization methods is further given to calculate the optimal weighting-delay parameters. Several examples are provided to verify the effectiveness of the proposed criteria.
Automatic early stopping using cross validation: quantifying the criteria. Cross validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting ('early stopping'). The exact criterion used for cross validation based early stopping, however, is chosen in an ad-hoc fashion by most researchers or training is stopped interactively. To aid a more well-founded selection of the stopping criterion, 14 different automatic stopping criteria from three classes were evaluated empirically for their efficiency and effectiveness in 12 different classification and approximation tasks using multi-layer perceptrons with RPROP training. The experiments show that, on average, slower stopping criteria allow for small improvements in generalization (in the order of 4%), but cost about a factor of 4 longer in training time.
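As a concrete instance, the simplest family of criteria studied there, generalization loss, stops once validation error has risen a fixed percentage above its best value so far. A sketch of that rule as I read it, with the threshold alpha left as a free parameter:

def gl_stop(val_errors, alpha=5.0):
    # stop at the first epoch where GL(t) = 100 * (E_va(t) / E_opt(t) - 1) > alpha
    best = float('inf')
    for t, e in enumerate(val_errors):
        best = min(best, e)
        if 100.0 * (e / best - 1.0) > alpha:
            return t
    return None  # criterion never triggered

print(gl_stop([0.30, 0.25, 0.24, 0.26, 0.27]))  # -> 3, since 0.26/0.24 is ~8.3% above the best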
Extracting and composing robust features with denoising autoencoders Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
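A minimal numpy sketch of the idea: corrupt the input with masking noise, encode the corrupted version, and penalize reconstruction error against the clean input. Tied weights, squared error, and the learning rate are my own simplifying assumptions.

import numpy as np
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_step(x, W, bh, bv, p_corrupt=0.3, lr=0.1):
    x_tilde = x * (rng.random(x.shape) > p_corrupt)  # masking corruption
    h = sigmoid(x_tilde @ W + bh)                    # encode the corrupted input
    r = sigmoid(h @ W.T + bv)                        # decode with tied weights
    err = r - x                                      # compare to the clean input
    g_out = err * r * (1 - r)                        # gradient at decoder pre-activation
    g_hid = (g_out @ W) * h * (1 - h)                # gradient at encoder pre-activation
    W -= lr * (np.outer(x_tilde, g_hid) + np.outer(g_out, h))
    bh -= lr * g_hid
    bv -= lr * g_out
    return float((err ** 2).sum())

d, k = 8, 4
W = 0.1 * rng.standard_normal((d, k))
bh, bv = np.zeros(k), np.zeros(d)
x = rng.random(d)
losses = [dae_step(x, W, bh, bv) for _ in range(200)]  # reconstruction loss trends down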
Logic Programming and Negation: A Survey. We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them. 1991 Mathematics Subject Classification: 68Q40, 68T15. CR Categories: F.3.2, F.4.1, H.3.3, I.2.3. Keywords and Phrases: negation, general logic programs, non-monotonic reasoning. Notes: The work of the first author was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2). The work...
On the Desirability of Acyclic Database Schemes A class of database schemes, called acyclic, was recently introduced. It is shown that this class has a number of desirable properties. In particular, several desirable properties that have been studied by other researchers in very different terms are all shown to be equivalent to acyclicity. In addition, several equivalent characterizations of the class in terms of graphs and hypergraphs are given, and a simple algorithm for determining acyclicity is presented. Also given are several equivalent characterizations of those sets M of multivalued dependencies such that M is the set of multivalued dependencies that are the consequences of a given join dependency. Several characterizations for a conflict-free (in the sense of Lien) set of multivalued dependencies are provided.
Consensus and Cooperation in Networked Multi-Agent Systems This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis...
Expressiveness and tractability in knowledge representation and reasoning
Planning as search: a quantitative approach We present the thesis that planning can be viewed as problem-solving search using subgoals, macro-operators, and abstraction as knowledge sources. Our goal is to quantify problem-solving performance using these sources of knowledge. New results include the identification of subgoal distance as a fundamental measure of problem difficulty, a multiplicative time-space tradeoff for macro-operators, and an analysis of abstraction which concludes that abstraction hierarchies can reduce exponential problems to linear complexity.
Simultaneous Pipelining in QPipe: Exploiting Work Sharing Opportunities Across Queries Data warehousing and scientific database applications operate on massive datasets and are characterized by complex queries accessing large portions of the database. Concurrent queries often exhibit high data and computation overlap, e.g., they access the same relations on disk, compute similar aggregates, or share intermediate results. Unfortunately, run-time sharing in modern database engines is limited by the paradigm of invoking an independent set of operator instances per query, potentially missing sharing opportunities if the buffer pool evicts data early.
Optimizing the Embedded Caching and Prefetching Software on a Network-Attached Storage System As the speed gap between memory and disk is so large today, caching and prefetching are critical to enterprise-class storage applications, which demand high performance. In this paper, we present our study of the performance of a mid-range storage server produced by Quanta Computer Incorporated. We first analyzed the existing caching mechanism in the server and then developed a fast caching methodology to reduce the cache access latency and processing overhead of the storage controller. In addition, we proposed a new adaptive prefetch scheme that reduces the average disk access time seen by the host. Via trace-driven simulation, we evaluated the performance of our new caching and adaptive prefetch schemes. Our results showed the performance improvement for the TPC-C on-line transaction benchmark.
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits.We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.1
0.05
0.025
0.008333
0.000353
0
0
0
0
0
0
0
0
0
A regularized correntropy framework for robust pattern recognition This letter proposes a new multiple linear regression model using regularized correntropy for robust pattern recognition. First, we motivate the use of correntropy to improve the robustness of the classical mean square error (MSE) criterion that is sensitive to outliers. Then an l1 regularization scheme is imposed on the correntropy to learn robust and sparse representations. Based on the half-quadratic optimization technique, we propose a novel algorithm to solve the nonlinear optimization problem. Second, we develop a new correntropy-based classifier based on the learned regularization scheme for robust object recognition. Extensive experiments over several applications confirm that the correntropy-based l1 regularization can improve recognition accuracy and receiver operating characteristic curves under noise corruption and occlusion.
The C-loss function for pattern classification This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that are well within the decision boundary and have small errors, and like the L0 or counting norm for samples that are outliers or are difficult to classify. Depending on the value of the kernel size parameter, the proposed loss function moves smoothly from convex to non-convex and becomes a close approximation to the misclassification loss (ideal 0-1 loss). We show that the discriminant function obtained by optimizing the proposed loss function in the neighborhood of the ideal 0-1 loss function to train a neural network is immune to overfitting, more robust to outliers, and has consistent and better generalization performance as compared to other commonly used loss functions, even after prolonged training. The results also show that it is a close competitor to the SVM. Since the proposed method is compatible with simple gradient based online learning, it is a practical way of improving the performance of neural network classifiers.
A loss function for classification based on a robust similarity metric We present a margin-based loss function for classification, inspired by the recently proposed similarity measure called correntropy. We show that correntropy induces a nonconvex loss function that is a closer approximation to the misclassification loss (ideal 0-1 loss). We show that the discriminant function obtained by optimizing the proposed loss function using a neural network is insensitive to outliers and has better generalization performance as compared to using the squared loss function which is common in neural network classifiers. The proposed method of training classifiers is a practical way of obtaining better results on real world classification problems, that uses a simple gradient based online training procedure for minimizing the empirical risk.
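Reading these two abstracts together, the loss they describe can be sketched in a few lines. The normalization by beta (so the loss is 1 at zero margin) and the assumption that the discriminant output lies in [-1, 1], e.g. a tanh unit, are my reading and may differ in detail from the papers.

import numpy as np

def c_loss(y, f, sigma=0.5):
    # y in {-1, +1}; f assumed in [-1, 1]; sigma is the kernel size
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
    e = 1.0 - y * f
    return beta * (1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2)))

print(c_loss(1.0, 0.9))    # ~0.02: confident, correct classification
print(c_loss(1.0, -0.9))   # ~1.16: the loss saturates near beta, so outliers stop dominating

Small sigma pushes the curve toward the 0-1 loss; large sigma recovers square-loss-like behavior, which is the convex-to-non-convex transition the abstracts mention.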
Extreme Learning Classifier with Deep Concepts.
Image Denoising and Inpainting with Deep Neural Networks. We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method achieves state-of-the-art performance in the image denoising task. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.
Decoding by linear programming This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ R^n} ||y - Ag||_{ℓ1} (where ||x||_{ℓ1} := Σ_i |x_i|), provided that the support of the vector of errors is not too large: ||e||_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
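The recast to a linear program is mechanical: introduce slack variables t with -t <= y - Ag <= t and minimize the sum of the t's. A sketch with scipy and made-up problem sizes:

import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    # solve min_g ||y - A g||_1 as an LP over x = [g; t]
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum of slacks t
    A_ub = np.block([[A, -np.eye(m)],               #  A g - t <=  y
                     [-A, -np.eye(m)]])             # -A g - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m   # g free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
f = rng.standard_normal(10)
e = np.zeros(30); e[:3] = 5.0                       # sparse corruption
print(np.allclose(l1_decode(A, A @ f + e), f, atol=1e-6))  # typically True: exact recovery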
Convex Neural Networks Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural networks. We show that training multi-layer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an infinite number of variables, but can be solved by incrementally inserting a hidden unit at a time, each time finding a linear classifier that minimizes a weighted sum of errors.
Learning methods for generic object recognition with invariance to pose and lighting We assess the applicability of several popular learning methods for the problem of recognizing generic visual categories with invariance to pose, lighting, and surrounding clutter. A large dataset comprising stereo image pairs of 50 uniform-colored toys under 36 azimuths, 9 elevations, and 6 lighting conditions was collected (for a total of 194,400 individual images). The objects were 10 instances of 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. Five instances of each category were used for training, and the other five for testing. Low-resolution grayscale images of the objects with various amounts of variability and surrounding clutter were used for training and testing. Nearest Neighbor methods, Support Vector Machines, and Convolutional Networks, operating on raw pixels or on PCA-derived features were tested. Test error rates for unseen object instances placed on uniform backgrounds were around 13% for SVM and 7% for Convolutional Nets. On a segmentation/recognition task with highly cluttered images, SVM proved impractical, while Convolutional nets yielded 16.7% error. A real-time version of the system was implemented that can detect and classify objects in natural scenes at around 10 frames per second.
Factored conditional restricted Boltzmann Machines for modeling motion style The Conditional Restricted Boltzmann Machine (CRBM) is a recently proposed model for time series that has a rich, distributed hidden state and permits simple, exact inference. We present a new model, based on the CRBM, that preserves its most important computational properties and includes multiplicative three-way interactions that allow the effective interaction weight between two units to be modulated by the dynamic state of a third unit. We factor the three-way weight tensor implied by the multiplicative model, reducing the number of parameters from O(N^3) to O(N^2). The result is an efficient, compact model whose effectiveness we demonstrate by modeling human motion. Like the CRBM, our model can capture diverse styles of motion with a single set of parameters, and the three-way interactions greatly improve the model's ability to blend motion styles or to transition smoothly among them.
The Boolean hierarchy: hardware over NP In this paper, we study the complexity of sets formed by boolean operations $(\bigcup, \bigcap,$ and complementation) on NP sets. These are the sets accepted by trees of hardware with NP predicates as leaves, and together form the boolean hierarchy. We present many results about the boolean hierarchy: separation and immunity results, complete languages, upward separations, connections to sparse oracles for NP, and structural asymmetries between complementary classes. Some results present new ideas and techniques. Others put previous results about NP and $D^{P}$ in a richer perspective. Throughout, we emphasize the structure of the boolean hierarchy and its relations with more common classes.
AMUSE: a minimally-unsatisfiable subformula extractor This paper describes a new algorithm for extracting unsatisfiable subformulas from a given unsatisfiable CNF formula. Such unsatisfiable "cores" can be very helpful in diagnosing the causes of infeasibility in large systems. Our algorithm is unique in that it adapts the "learning process" of a modern SAT solver to identify unsatisfiable subformulas rather than search for satisfying assignments. Compared to existing approaches, this method can be viewed as a bottom-up core extraction procedure which can be very competitive when the core sizes are much smaller than the original formula size. Repeated runs of the algorithm with different branching orders yield different cores. We present experimental results on a suite of large automotive benchmarks showing the performance of the algorithm and highlighting its ability to locate not just one but several cores.
Near-Optimal Parallel Prefetching and Caching Recently there has been a great deal of interest in the operating systems research community in prefetching and caching data from parallel disks, as a technique for enabling serial applications to improve input--output (I/O) performance. In this paper, algorithms are considered for integrated prefetching and caching in a model with a fixed-size cache and any number of backing storage devices (disks). The integration of caching and prefetching with a single disk was previously considered by Cao, Felten, Karlin, and Li. Here, it is shown that the natural extension of their aggressive algorithm to the parallel disk case is suboptimal by a factor near the number of disks in the worst case. The main result is a new algorithm, reverse aggressive, with near-optimal performance for integrated prefetching and caching in the presence of multiple disks.
WOW: wise ordering for writes - combining spatial and temporal locality in non-volatile caches Write caches using fast, non-volatile storage are now widely used in modern storage controllers since they enable hiding latency on writes. Effective algorithms for write cache management are extremely important since (i) in RAID-5, due to read-modify-write and parity updates, each write may cause up to four separate disk seeks while a read miss causes only a single disk seek; and (ii) typically, write cache size is much smaller than the read cache size - a proportion of 1 : 16 is typical. A write caching policy must decide: what data to destage. On one hand, to exploit temporal locality, we would like to destage data that is least likely to be re-written soon with the goal of minimizing the total number of destages. This is normally achieved using a caching algorithm such as LRW (least recently written). However, a read cache has a very small uniform cost of replacing any data in the cache, whereas the cost of destaging depends on the state of the disk heads. Hence, on the other hand, to exploit spatial locality, we would like to destage writes so as to minimize the average cost of each destage. This can be achieved by using a disk scheduling algorithm such as CSCAN, that destages data in the ascending order of the logical addresses, at the higher level of the write cache in a storage controller. Observe that LRW and CSCAN focus, respectively, on exploiting either temporal or spatial locality, but not both simultaneously. We propose a new algorithm, namely, Wise Ordering for Writes (WOW), for write cache management that effectively combines and balances temporal and spatial locality. Our experimental set-up consisted of an IBM xSeries 345 dual processor server running Linux that is driving a (software) RAID-5 or RAID-10 array using a workload akin to Storage Performance Council's widely adopted SPC-1 benchmark. In a cache-sensitive configuration on RAID-5, WOW delivers peak throughput that is 129% higher than CSCAN and 9% higher than LRW. In a cache-insensitive configuration on RAID-5, WOW and CSCAN deliver peak throughput that is 50% higher than LRW. For a random write workload with nearly 100% misses, on RAID-10, with a cache size of 64K, 4KB pages (256MB), WOW and CSCAN deliver peak throughput that is 200% higher than LRW. In summary, WOW has better or comparable peak throughput to the best of CSCAN and LRW across a wide gamut of write cache sizes and workload configurations. In addition, even at lower throughputs, WOW has lower average response times than CSCAN and LRW.
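The ordering idea, CSCAN order plus a CLOCK-style recency bit, can be sketched compactly. This toy version is my reading of the destage order only, not IBM's implementation; a real controller would keep a sorted structure instead of re-sorting on each destage.

class WowCache:
    def __init__(self):
        self.recency = {}            # dirty lba -> recency bit
        self.cursor = -1             # lba position of the destage pointer

    def write(self, lba):
        self.recency[lba] = lba in self.recency   # a re-write sets the bit

    def destage(self):
        while self.recency:
            order = sorted(self.recency)          # CSCAN: ascending addresses
            nxt = next((l for l in order if l > self.cursor), order[0])  # wrap around
            self.cursor = nxt
            if self.recency[nxt]:                 # recently re-written: spare it once
                self.recency[nxt] = False
            else:
                del self.recency[nxt]
                return nxt
        return None

c = WowCache()
for lba in [50, 10, 30]:
    c.write(lba)
c.write(10)                                       # 10 becomes "hot"
print([c.destage() for _ in range(3)])            # [30, 50, 10]: 10 survived one sweep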
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.105
0.11
0.11
0.036667
0.005789
0.000557
0.000025
0.000005
0.000001
0
0
0
0
0
Simplifying Lexical Simplification: Do We Need Simplified Corpora? Simplification of lexically complex texts, by replacing complex words with their simpler synonyms, helps non-native speakers, children, and language-impaired people understand text better. Recent lexical simplification methods rely on manually simplified corpora, which are expensive and time-consuming to build. We present an unsupervised approach to lexical simplification that makes use of the most recent word vector representations and requires only regular corpora. Results of both automated and human evaluation show that our simple method is as effective as systems that rely on simplified corpora.
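A sketch of the embedding-based substitution step this line of work describes, under my own simplifying proxies: candidate substitutes are nearest neighbors in the vector space, and "simpler" means more frequent in an ordinary corpus. Real systems add candidate filtering and context checks on top of this.

import numpy as np

def simplify(word, vectors, freq, top_k=10):
    # vectors: word -> unit-normalized embedding; freq: word -> corpus frequency
    v = vectors[word]
    sims = {w: float(v @ u) for w, u in vectors.items() if w != word}
    candidates = sorted(sims, key=sims.get, reverse=True)[:top_k]
    simpler = [w for w in candidates if freq.get(w, 0) > freq.get(word, 0)]
    return max(simpler, key=freq.get) if simpler else word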
Unsupervised Lexical Simplification for Non-Native Speakers. Lexical Simplification is the task of replacing complex words with simpler alternatives. We propose a novel, unsupervised approach for the task. It relies on two resources: a corpus of subtitles and a new type of word embeddings model that accounts for the ambiguity of words. We compare the performance of our approach and many others over a new evaluation dataset, which accounts for the simplification needs of 400 non-native English speakers. The experiments show that our approach outperforms state-of-the-art work in Lexical Simplification.
Learning A Lexical Simplifier Using Wikipedia In this paper we introduce a new lexical simplification approach. We extract over 30K candidate lexical simplifications by identifying aligned words in a sentence-aligned corpus of English Wikipedia with Simple English Wikipedia. To apply these rules, we learn a feature-based ranker using SVMrank trained on a set of labeled simplifications collected using Amazon's Mechanical Turk. Using human simplifications for evaluation, we achieve a precision of 76% with changes in 86% of the examples.
Text simplification for language learners: a corpus analysis.
SemEval-2012 task 1: English Lexical Simplification We describe the English Lexical Simplification task at SemEval-2012. This is the first time such a shared task has been organized and its goal is to provide a framework for the evaluation of systems for lexical simplification and foster research on context-aware lexical simplification approaches. The task requires that annotators and systems rank a number of alternative substitutes -- all deemed adequate -- for a target word in context, according to how "simple" these substitutes are. The notion of simplicity is biased towards non-native speakers of English. Out of nine participating systems, the best scoring ones combine context-dependent and context-independent information, with the strongest individual contribution given by the frequency of the substitute regardless of its context.
Effect of Non-linear Deep Architecture in Sequence Labeling.
A sufficient condition for backtrack-bounded search Backtrack search is often used to solve constraint satisfaction problems. A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree. This analysis leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems. The solutions involve a combination of relaxation preprocessing and backtrack search. The bounds are expressed in terms of the structure of the constraint connections. Specifically, the effort is shown to have a bound exponential in the size of the largest biconnected component of the constraint graph, as opposed to the size of the graph as a whole.
Trace driven analysis of write caching policies for disks The I/O subsystem in a computer system is becoming the bottleneck as a result of recent dramatic improvements in processor speeds. Disk caches have been effective in closing this gap but the benefit is restricted to the read operations as the write I/Os are usually committed to disk to maintain consistency and to allow for crash recovery. As a result, write I/O traffic is becoming dominant and solutions to alleviate this problem are becoming increasingly important. A simple solution which can easily work with existing file systems is to use non-volatile disk caches together with a write-behind strategy. In this study, we look at the issues around managing such a cache using a detailed trace driven simulation. Traces from three different commercial sites are used in the analysis of various policies for managing the write cache. We observe that even a simple write-behind policy for the write cache is effective in reducing the total number of writes by over 50%. We further observe that the use of hysteresis in the policy to purge the write cache, with two thresholds, yields substantial improvement over a single threshold scheme. The inclusion of a mechanism to piggyback blocks from the write cache with read miss I/Os further reduces the number of writes to only about 15% of the original total number of write operations. We compare two piggybacking options and also study the impact of varying the write cache size. We briefly looked at the case of a single non-volatile disk cache to estimate the performance impact of statically partitioning the cache for reads and writes.
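The two-threshold hysteresis purge reduces to a few lines; the high and low water marks below are made-up values, not the ones from the traces:

def purge_if_needed(dirty, destage_one, capacity=1000, high=0.8, low=0.5):
    # dirty: list of dirty blocks (oldest first); destage_one: writes one block to disk
    if len(dirty) < high * capacity:
        return 0                        # below the high-water mark: do nothing
    written = 0
    while len(dirty) > low * capacity:  # drain down to the low-water mark
        destage_one(dirty.pop(0))
        written += 1
    return written

Compared to a single threshold, destaging in bursts like this leaves the cache quiet between purges, giving overwrites time to be absorbed before blocks reach disk.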
The Complexity of Reasoning with Global Constraints Constraint propagation is one of the techniques central to the success of constraint programming. To reduce search, fast algorithms associated with each constraint prune the domains of variables. With global (or non-binary) constraints, the cost of such propagation may be much greater than the quadratic cost for binary constraints. We therefore study the computational complexity of reasoning with global constraints. We first characterise a number of important questions related to constraint propagation. We show that such questions are intractable in general, and identify dependencies between the tractability and intractability of the different questions. We then demonstrate how the tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when constraints can be safely generalized, when decomposing constraints will reduce the amount of pruning, and when combining constraints is tractable.
Safety, Visibility, and Performance in a Wide-Area File System As mobile clients travel, their costs to reach home filing services change, with serious performance implications. Current file systems mask these performance problems by reducing the safety of updates, their visibility, or both. This is the result of combining the propagation and notification of updates from clients to servers.Fluid Replication separates these mechanisms. Client updates are shipped to nearby replicas, called WayStations, rather than remote servers, providing inexpensive safety. WayStations and servers periodically exchange knowledge of updates through reconciliation, providing a tight bound on the time until updates are visible. Reconciliation is non-blocking, and update contents are not propagated immediately; propagation is deferred to take advantage of the low incidence of sharing in file systems.Our measurements of a Fluid Replication prototype show that update performance is completely independent of wide-area networking costs, at the expense of increased sharing costs. This places the costs of sharing on those who require it, preserving common case performance. Furthermore, the benefits of independent update outweigh the costs of sharing for a workload with substantial sharing. A trace-based simulation shows that a modest reconciliation interval of 15 seconds can eliminate 98% of all stale accesses. Furthermore, our traced clients could collectively expect availability of five nines, even with deferred propagation of updates.
Traveling to Rome: QoS Specifications for Automated Storage System Management The design and operation of very large-scale storage systems is an area ripe for application of automated design and management techniques - and at the heart of such techniques is the need to represent storage system QoS in many guises: the goals (service level requirements) for the storage system, predictions for the design that results, enforcement constraints for the runtime system to guarantee, and observations made of the system as it runs. Rome is the information model that the Storage Systems Program at HP Laboratories has developed to address these needs. We use it as an "information bus" to tie together our storage system design, configuration, and monitoring tools. In 5 years of development, Rome is now on its third iteration; this paper describes its information model, with emphasis on the QoS-related components, and presents some of the lessons we have learned over the years in using it.
Striping in a RAID level 5 disk array Redundant disk arrays are an increasingly popular way to improve I/O system performance. Past research has studied how to stripe data in non-redundant (RAID Level 0) disk arrays, but none has yet been done on how to stripe data in redundant disk arrays such as RAID Level 5, or on how the choice of striping unit varies with the number of disks. Using synthetic workloads, we derive simple design rules for striping data in RAID Level 5 disk arrays given varying amounts of workload information. We then validate the synthetically derived design rules using real workload traces to show that the design rules apply well to real systems.We find no difference in the optimal striping units for RAID Level 0 and 5 for read-intensive workloads. For write-intensive workloads, in contrast, the overhead of maintaining parity causes full-stripe writes (writes that span the entire error-correction group) to be more efficient than read-modify writes or reconstruct writes. This additional factor causes the optimal striping unit for RAID Level 5 to be four times smaller for write-intensive workloads than for read-intensive workloads.We next investigate how the optimal striping unit varies with the number of disks in an array. We find that the optimal striping unit for reads in a RAID Level 5 varies inversely to the number of disks, but that the optimal striping unit for writes varies with the number of disks. Overall, we find that the optimal striping unit for workloads with an unspecified mix of reads and writes is independent of the number of disks.Together, these trends lead us to recommend (in the absence of specific workload information) that the striping unit over a wide range of RAID Level 5 disk array sizes be equal to 1/2 * average positioning time * disk transfer rate.
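The closing rule of thumb is easy to sanity-check with plausible numbers for disks of that era (my numbers, not the paper's):

avg_positioning_time = 0.010   # seconds: average seek plus half a rotation
transfer_rate = 20e6           # bytes per second
striping_unit = 0.5 * avg_positioning_time * transfer_rate
print(striping_unit)           # 100000.0 bytes, i.e. roughly a 100 KB striping unit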
Spatial computing: an emerging paradigm for autonomic computing and communication Emerging distributed computing scenarios call for novel “autonomic” approaches to distributed systems development and management. In this position paper we analyze the distinguishing characteristics of those scenarios, discuss the inadequacy of traditional paradigms, and elaborate on primary role of “space” in modern distributed computing. In particular, we show that spatial abstractions promise to be basic necessary ingredients for a novel “spatial computing” paradigm, acting as a unifying framework for autonomic computing and communication. On this base, we propose a preliminary “spatial computing stack” to frame the key concepts and mechanisms of spatial computing. Eventually, we try to sketch a research agenda in the area.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on general-purpose PCs cannot keep up with this growing demand, whereas accelerating the algorithms on FPGAs offers better performance than other platforms. This paper explains and classifies current sequence alignment algorithms, analyzes their different types, and presents a taxonomy of FPGA-based sequence alignment implementations. The work summarizes current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.028844
0.028571
0.024263
0.018039
0.013248
0.000071
0
0
0
0
0
0
0
0
Analysis of multiple process flows in an ASIC fab with a detailed photolithography area model ASIC fabs are characterized by multiple process flows. This is mainly due to the highly diversified product portfolios within such fabs. In this study, we first examined the cycle time for individual process flows in a medium volume ASIC fab. We compared these process flows in terms of overall cycle time and using a cycle time index. Secondly, focusing on photolithography we developed a simulation model that employs cycle time data to analyze the impacts of process flow diversity. Thirdly, we used this model to examine the impact on cycle time of changing the volumes of wafer starts on different process flows. The detailed results of simulation experiments along with the concluding remarks are given at the end of the study.
An analysis of tool capabilities in the photolithography area of an ASIC fab Photolithography is generally regarded as the most constraining element in semiconductor manufacturing. This is primarily attributable to the high capital investment and extensive re-entrant flows throughout this section. Cycle time management in this area is crucial to balance the trade off between tool utilization and cycle time. In a low volume, high product mix fab the inclusion of tool capabilities, and their status, can significantly affect tool utilization and overall cycle times. In this paper a simulation model is developed to aid cycle time decision making policies in the photolithography section of a low volume, high product mix fab. The objective of the study is to determine the optimum course of action, for varying levels of expected increased demand, while maintaining acceptable cycle times and minimizing total capital spent in photolithography. The actions reviewed include the increased use of capabilities where available, followed by the purchase of new photolithography equipment.
A rapid modeling technique for measurable improvements in factory performance This paper discusses a methodology for quickly investigating problem areas in semiconductor wafer fabrication factories by creating a model for the production area of interest only (as opposed to a model of the complete factory operation). All other factory operations are treated as “black boxes”. Specific assumptions are made to capture the effect of re-entrant flow. This approach allows a rapid response to production questions when beginning a new simulation project. The methodology was applied to a cycle-time and capacity analysis of the photolithography operation for Siemens' Dresden wafer fab. The results of this simulation study are presented
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Logic Programming and Negation: A Survey. We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them. 1991 Mathematics Subject Classification: 68Q40, 68T15. CR Categories: F.3.2, F.4.1, H.3.3, I.2.3. Keywords and Phrases: negation, general logic programs, non-monotonic reasoning. Notes: The work of the first author was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2). The work...
A sufficient condition for backtrack-bounded search Backtrack search is often used to solve constraint satisfaction problems. A relationship involving the structure of the constraints is described that provides a bound on the backtracking required to advance deeper into the backtrack tree. This analysis leads to upper bounds on the effort required for solution of a class of constraint satisfaction problems. The solutions involve a combination of relaxation preprocessing and backtrack search. The bounds are expressed in terms of the structure of the constraint connections. Specifically, the effort is shown to have a bound exponential in the size of the largest biconnected component of the constraint graph, as opposed to the size of the graph as a whole.
Convergence of a Nonconforming Multiscale Finite Element Method The multiscale finite element method (MsFEM) [T. Y. Hou, X. H. Wu, and Z. Cai, Math. Comp., 1998, to appear; T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189] has been introduced to capture the large scale solutions of elliptic equations with highly oscillatory coefficients. This is accomplished by constructing the multiscale base functions from the local solutions of the elliptic operator. Our previous study reveals that the leading order error in this approach is caused by the ``resonant sampling,'' which leads to large error when the mesh size is close to the small scale of the continuous problem. Similar difficulty also arises in numerical upscaling methods. An oversampling technique has been introduced to alleviate this difficulty [T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189]. A consequence of the oversampling method is that the resulting finite element method is no longer conforming. Here we give a detailed analysis of the nonconforming error. Our analysis also reveals a new cell resonance error which is caused by the mismatch between the mesh size and the wavelength of the small scale. We show that the cell resonance error is of lower order. Our numerical experiments demonstrate that the cell resonance error is generically small and is difficult to observe in practice.
Efficient sparse coding algorithms Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
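The paper's own solvers (feature-sign search and a Lagrange dual) are more elaborate; as a baseline for the same L1-regularized least-squares subproblem, min_s ||x - B s||^2 + lam * ||s||_1, plain iterative soft-thresholding (ISTA) fits in a few lines:

import numpy as np

def ista(B, x, lam, iters=500):
    L = 2.0 * np.linalg.norm(B, 2) ** 2        # Lipschitz constant of the gradient
    s = np.zeros(B.shape[1])
    for _ in range(iters):
        g = 2.0 * B.T @ (B @ s - x)            # gradient of the smooth part
        z = s - g / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return s

The soft-threshold step is what zeroes out coefficients and produces the sparsity.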
Synchronized Disk Interleaving A group of disks may be interleaved to speed up data transfers in a manner analogous to the speedup achieved by main memory interleaving. Conventional disks may be used for interleaving by spreading data across disks and by treating multiple disks as if they were a single one. Furthermore, the rotation of the interleaved disks may be synchronized to simplify control and also to optimize performance. In addition, checksums may be placed on separate check-sum disks in order to improve reliability. In this paper, we study synchronized disk interleaving as a high-performance mass storage system architecture. The advantages and limitations of the proposed disk interleaving scheme are analyzed using the M/G/1 queueing model and compared to the conventional disk access mechanism.
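The M/G/1 analysis mentioned here rests on the Pollaczek-Khinchine formula for mean queueing delay; a sketch with illustrative numbers (mine, not the paper's):

def mg1_wait(lam, es, es2):
    # lam: arrival rate; es: E[S]; es2: E[S^2] of the service time S
    rho = lam * es
    assert rho < 1.0, "queue is unstable"
    return lam * es2 / (2.0 * (1.0 - rho))

# interleaving shortens the transfer phase of the service time, so es and es2 drop:
print(mg1_wait(lam=30.0, es=0.020, es2=0.00055))  # mean wait ~0.021 seconds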
Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[\log^k n]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1\subseteq \beta_2\subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all $k$.
Normal forms for answer sets programming Normal forms for logic programs under stable/answer set semantics are introduced. We argue that these forms can simplify the study of program properties, mainly consistency. The first normal form, called the kernel of the program, is useful for studying the existence and number of answer sets. A kernel program is composed of the atoms which are undefined in the Well-founded semantics, which are those that directly affect the existence of answer sets. The body of rules is composed of negative literals only. Thus, the kernel form tends to be significantly more compact than other formulations. Also, it is possible to check consistency of kernel programs in terms of colorings of the Extended Dependency Graph program representation which we previously developed. The second normal form is called 3-kernel. A 3-kernel program is composed of the atoms which are undefined in the Well-founded semantics. Rules in 3-kernel programs have at most two conditions, and each rule either belongs to a cycle, or defines a connection between cycles. 3-kernel programs may have positive conditions. The 3-kernel normal form is very useful for the static analysis of program consistency, i.e., the syntactic characterization of existence of answer sets. This result can be obtained thanks to a novel graph-like representation of programs, called the Cycle Graph, which is presented in the companion article Costantini (2004b).
A cost-benefit scheme for high performance predictive prefetching
Scheduling parallel I/O operations The I/O bottleneck in parallel computer systems has recently begun receiving increasing interest. Most attention has focused on improving the performance of I/O devices using fairly low-level parallelism in techniques such as disk striping and interleaving. Widely applicable solutions, however, will require an integrated approach which addresses the problem at multiple system levels, including applications, systems software, and architecture. We propose that within the context of such an integrated approach, scheduling parallel I/O operations will become increasingly attractive and can potentially provide substantial performance benefits. We describe a simple I/O scheduling problem and present approximate algorithms for its solution. The costs of using these algorithms in terms of execution time, and the benefits in terms of reduced time to complete a batch of I/O operations, are compared with the situations in which no scheduling is used, and in which an optimal scheduling algorithm is used. The comparison is performed both theoretically and experimentally. We have found that, in exchange for a small execution time overhead, the approximate scheduling algorithms can provide substantial improvements in I/O completion times.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.2
0.04
0
0
0
0
0
0
0
0
0
0
0
Expressiveness and tractability in knowledge representation and reasoning
The Downward Refinement Property Using abstraction in planning does not guarantee an improvement in search efficiency; it is possible for an abstract planner to display worse performance than one that does not use abstraction. Analysis and experiments have shown that good abstraction hierarchies have, or are close to having, the downward refinement property, whereby, given that a concrete-level solution exists, every abstract solution can be refined to a concrete-level solution without backtracking across abstract levels. Working within a semantics for ABSTRIPS-style abstraction we provide a characterization of the downward refinement property. After discussing its effect on search efficiency, we develop a semantic condition sufficient for guaranteeing its presence in an abstraction hierarchy. Using the semantic condition, we then provide a set of sufficient and polynomial-time checkable syntactic conditions that can be used for checking a hierarchy for the downward refinement property.
Impediments to Universal preference-based default theories Research on nonmonotonic and default reasoning has identified several important criteria for preferring alternative default inferences. The theories of reasoning based on each of these criteria may uniformly be viewed as theories of rational inference, in which the reasoner selects maximally preferred states of belief. Though researchers have noted some cases of apparent conflict between the preferences supported by different theories, it has been hoped that these special theories of reasoning may be combined into a universal logic of nonmonotonic reasoning. We show that the different categories of preferences conflict more than has been realized, and adapt formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning. Our results can be interpreted as demonstrating that, within the preferential framework, we cannot expect much improvement on the rigid lexicographic priority mechanisms that have been proposed for conflict resolution.
How long will it take? We present a method for approximating the expected number of steps required by a heuristic search algorithm to reach a goal from any initial state in a problem space. The method is based on a mapping from the original state space to an abstract space in which states are characterized only by a syntactic "distance" from the nearest goal. Modeling the search algorithm as a Markov process in the abstract space yields a simple system of equations for the solution time for each state. We derive some insight into the behavior of search algorithms by examining some closed form solutions for these equations; we also show that many problem spaces have a clearly delineated "easy zone", inside which problems are trivial and outside which problems are impossible. The theory is borne out by experiments with both Markov and non-Markov search algorithms. Our results also bear on recent experimental data suggesting that heuristic repair algorithms can solve large constraint satisfaction problems easily, given a preprocessor that generates a sufficiently good initial state.
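The key computational step the abstract mentions, a "simple system of equations for the solution time for each state", is ordinary linear algebra once the abstract space is fixed: if P holds the transition probabilities between distance-to-goal states, the expected step counts t satisfy t = 1 + P t with t = 0 at the goal. A toy sketch with an invented four-state chain:

```python
import numpy as np

# States are distances to the nearest goal; state 0 is the (absorbing) goal.
# P[i, j] = assumed probability that one search step moves distance i to j.
P = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.6, 0.3, 0.1, 0.0],
    [0.0, 0.5, 0.3, 0.2],
    [0.0, 0.0, 0.7, 0.3],
])
A = np.eye(3) - P[1:, 1:]              # drop the goal row/column
t = np.linalg.solve(A, np.ones(3))     # expected steps from distances 1..3
print({d: round(steps, 2) for d, steps in enumerate(t, start=1)})
```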
Temporal data base management Reasoning about time typically involves drawing conclusions on the basis of incomplete information. Uncertainty arises in the form of ignorance, indeterminacy, and indecision. Despite the lack of complete information, a problem solver is continually forced to make predictions in order to pursue hypotheses and plan for the future. Such predictions are frequently contravened by subsequent evidence. This paper presents a computational approach to temporal reasoning that directly confronts these issues. The approach centers around techniques for managing a data base of assertions corresponding to the occurrence of events and the persistence of their effects over time. The resulting computational framework performs the temporal analog of (static) reason maintenance by keeping track of dependency information involving assumptions about the truth of facts spanning various intervals of time. The system described in this paper extends classical predicate-calculus data bases, such as those used by PROLOG, to deal with time in an efficient and natural manner.
Macro-operators: a weak method for learning This article explores the idea of learning efficient strategies for solving problems by searching for macro-operators. A macro-operator, or macro for short, is simply a sequence of operators chosen from the primitive operators of a problem. The technique is particularly useful for problems with non-serializable subgoals, such as Rubik's Cube, for which other weak methods fail. Both a problem-solving program and a learning program are described in detail. The performance of these programs is analyzed in terms of the number of macros required to solve all problem instances, the length of the resulting solutions (expressed as the number of primitive moves), and the amount of time necessary to learn the macros. In addition, a theory of why the method works, and a characterization of the range of problems for which it is useful are presented. The theory introduces a new type of problem structure called operator decomposability. Finally, it is concluded that the macro technique is a new kind of weak method, a method for learning as opposed to problem solving.
CP-nets: a tool for representing and reasoning with conditional ceteris paribus preference statements Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence.
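One of the inference tasks listed, constructing the best outcome, is especially simple for an acyclic CP-net: sweep the variables in topological order and set each to its most preferred value given its parents' values. A toy sketch with an invented two-variable network:

```python
# Hypothetical CP-net: 'weather' is a root, 'activity' depends on it.
parents = {"weather": [], "activity": ["weather"]}
# Conditional preference tables: parent assignment -> values, best first.
cpt = {
    "weather": {(): ["sun", "rain"]},
    "activity": {("sun",): ["hike", "movie"], ("rain",): ["movie", "hike"]},
}

def best_outcome(topological_order):
    outcome = {}
    for var in topological_order:
        ctx = tuple(outcome[p] for p in parents[var])
        outcome[var] = cpt[var][ctx][0]   # most preferred value given parents
    return outcome

print(best_outcome(["weather", "activity"]))  # {'weather': 'sun', 'activity': 'hike'}
```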
On the size of reactive plans One of the most widespread approaches to reactive planning is Schoppers' universal plans. We propose a stricter definition of universal plans which guarantees a weak notion of soundness not present in the original definition. Furthermore, we isolate three different types of completeness which capture different behaviours exhibited by universal plans. We show that universal plans which run in polynomial time and are of polynomial size cannot satisfy even the weakest type of completeness unless the polynomial hierarchy collapses. However, by relaxing either the polynomial time or the polynomial space requirement, the construction of universal plans satisfying the strongest type of completeness becomes trivial.
Solving Simple Planning Problems with More Inference and No Search Many benchmark domains in AI planning including Blocks, Logistics, Gripper, Satellite, and others lack the interactions that characterize puzzles and can be solved non-optimally in low polynomial time. They are indeed easy problems for people, although as with many other problems in AI, not always easy for machines. In this paper, we address the question of whether simple problems such as these can be solved in a simple way, i.e., without search, by means of a domain-independent planner. We address this question empirically by extending the constraint-based planner CPT with additional domain-independent inference mechanisms. We show then for the first time that these and several other benchmark domains can be solved with no backtracks while performing only polynomial node operations. This is a remarkable finding in our view that suggests that the classes of problems that are solvable without search may be actually much broader than the classes that have been identified so far by work in Tractable Planning.
Some Results on the Complexity of Planning with Incomplete Information Planning with incomplete information may mean a number of different things: that certain facts of the initial state are not known, that operators can have random or nondeterministic effects, or that the plans created contain sensing operations and are branching. Study of the complexity of incomplete information planning has so far been concentrated on probabilistic domains, where a number of results have been found. We examine the complexity of planning in nondeterministic propositional...
Facets of the knapsack polytope A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality. A sufficient condition is also established for a larger class of inequalities (with coefficients not restricted to 0 and 1) to define a facet for the same polytope, and a procedure is given for generating all facets in the above two classes. The procedure can be viewed as a way of generating cutting planes for 0–1 programs.
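A concrete instance of the 0-1 coefficient case is the family of cover inequalities: any index set C whose weights sum to more than b yields the valid inequality sum of x_j over C <= |C| - 1, and the minimal covers are the natural facet candidates. A brute-force enumeration for a small assumed knapsack:

```python
from itertools import combinations

a, b = [6, 5, 4, 3, 2], 10   # assumed knapsack constraint a.x <= b

def minimal_covers(a, b):
    """Yield C with sum(a[C]) > b where dropping any item restores feasibility."""
    for r in range(1, len(a) + 1):
        for C in combinations(range(len(a)), r):
            w = sum(a[j] for j in C)
            if w > b and all(w - a[j] <= b for j in C):
                yield C

for C in minimal_covers(a, b):
    print(f"sum(x_j for j in {C}) <= {len(C) - 1}")
```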
Planned and traversable play-out: a flexible method for executing scenario-based programs We introduce a novel approach to the smart execution of scenario-based models of reactive systems, such as those resulting from the multi-modal inter-object language of live sequence charts (LSCs). Our approach finds multiple execution paths from a given state of the system, and allows the user to interactively traverse them. The method is based on translating the problem of finding a superstep of execution into a problem in the AI planning domain, and using a known planning algorithm, which we have had to modify and strengthen for our purposes.
Planning with Different Forms of Domain-Dependent Control Knowledge - An Answer Set Programming Approach In this paper we present a declarative approach to adding domain-dependent control knowledge for Answer Set Planning (ASP). Our approach allows different types of domain-dependent control knowledge such as hierarchical, temporal, or procedural knowledge to be represented and exploited in parallel, thus combining the ideas of control knowledge in HTN-planning, GOLOG-programming, and planning with temporal knowledge into ASP. To do so, we view domain-dependent control knowledge as sets of independent constraints. An advantage of this approach is that domain-dependent control knowledge can be modularly formalized and added to the planning problem as desired. We define a set of constructs for constraint representation and provide a set of domain-independent logic programming rules for checking constraint satisfaction.
Anatomical Structure Sketcher For Cephalograms By Bimodal Deep Learning The lateral cephalogram is a commonly used medium to acquire patient-specific morphology for diagnosis and treatment planning in clinical dentistry. Robust anatomical structure detection and accurate annotation remain challenging, considering the personal skeletal variations and image blurs caused by device-specific projection magnification, together with structure overlapping in the lateral cephalograms. We propose a novel cephalogram sketcher system, where the contour extraction of anatomical structures is formulated as a cross-modal morphology transfer from regular image patches to arbitrary curves. Specifically, the image patches of structures of interest are located by a hierarchical pictorial model. The automatic contour sketcher converts the image patch to a morphable boundary curve via a bimodal deep Boltzmann machine. The deep machine learns a joint representation of patch textures and contours, and forms a path from one modality (patches) to the other (contours). Thus, the sketcher can infer the contours by alternating Gibbs sampling along the path in a manner similar to data completion. The proposed method is robust in structure detection and also tends to produce accurate structure shapes and landmarks even in blurry X-ray images. The experiments performed on clinically captured cephalograms demonstrate the effectiveness of our method.
1.200126
0.200126
0.200126
0.100121
0.025052
0.011867
0.000138
0.000109
0.000078
0.000034
0.000003
0
0
0
Performance Evaluation of Traditional Caching Policies on a Large System with Petabytes of Data Caching is widely known to be an effective method for improving I/O performance by storing frequently used data on higher speed storage components. However, most existing studies that focus on caching performance evaluate fairly small files populating a relatively small cache. Few reports are available that detail the performance of traditional cache replacement policies on extremely large caches. Do such traditional caching policies still work effectively when applied to systems with petabytes of data? In this paper, we comprehensively evaluate the performance of several cache policies, which include First-In-First-Out (FIFO), Least Recently Used (LRU) and Least Frequently Used (LFU), on the global satellite imagery distribution application maintained by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS). Evidence is presented suggesting traditional caching policies are capable of providing performance gains when applied to large data sets just as they do with smaller data sets. Our evaluation is based on approximately three million real-world satellite image download requests representing global user download behavior since October 2008.
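A trace-driven comparison like the one described is straightforward to mock up. The sketch below replays a synthetic skewed trace (standing in for the EROS download logs, which are not reproduced here) against FIFO, LRU and a simple global-frequency LFU variant, and reports hit ratios:

```python
from collections import Counter, OrderedDict
import random

def hit_ratio(trace, capacity, policy):
    lru = OrderedDict()   # resident items in recency order
    fifo = []             # resident items in arrival order
    freq = Counter()      # reference counts (global-frequency LFU)
    hits = 0
    for x in trace:
        freq[x] += 1
        if x in lru:
            hits += 1
            lru.move_to_end(x)            # refresh recency on a hit
            continue
        if len(lru) >= capacity:          # miss with a full cache: evict
            if policy == "FIFO":
                victim = fifo.pop(0)
            elif policy == "LRU":
                victim = next(iter(lru))  # least recently used
            else:                         # LFU
                victim = min(lru, key=freq.__getitem__)
            lru.pop(victim, None)
            if victim in fifo:
                fifo.remove(victim)
        lru[x] = True
        fifo.append(x)
    return hits / len(trace)

random.seed(1)
trace = [int(random.paretovariate(1.2)) for _ in range(20000)]  # skewed popularity
for policy in ("FIFO", "LRU", "LFU"):
    print(policy, round(hit_ratio(trace, 100, policy), 3))
```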
Architecture Design of a Data Intensive Satellite Image Processing and Distribution System High-speed and cost-effective architecture design has become a great challenge for future data-intensive scalable high performance computing (HPC) systems. In this paper, we discuss the significant architecture design challenges of a large scale satellite image processing and distribution system maintained by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS). Specifically, we conduct a real world case study on 1) how changing workloads can greatly affect existing system architecture; 2) how EROS's new system architecture adapts to the significant workload changes; and 3) how to further optimize the performance of the current EROS system.
Practical buffer cache management scheme based on simple prefetching Many replacement and prefetching policies have recently been proposed for buffer cache management. However, many real operating systems, including GNU/Linux, generally use the simple least recently used (LRU) replacement policy, with prefetching being employed only in special situations such as when sequentiality is detected. In this paper, we propose the SA-W2R scheme that integrates buffer management and prefetching, where prefetching is done constantly in an aggressive fashion. The scheme is simple to implement, making it a feasible solution in real systems. In its basic form, it uses the LRU policy for buffer replacement. However, its modular design allows any replacement policy to be incorporated into the scheme. For prefetching, it uses the LRU-one block lookahead (LRU-OBL) approach, eliminating the extra burden that is generally necessary in other prefetching approaches. Implementation studies based on GNU/Linux show that SA-W2R performs better than GNU/Linux, with a maximum increase of 23% for the workloads considered.
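The LRU-OBL component is simple enough to capture in a few lines: every demand access to block i also admits block i+1 into the same LRU-managed buffer. This toy model illustrates the idea only; it is not the authors' kernel implementation.

```python
from collections import OrderedDict

class LruObl:
    """LRU buffer with one-block-lookahead prefetching."""
    def __init__(self, capacity):
        self.cap, self.buf = capacity, OrderedDict()
        self.hits = self.refs = 0

    def _admit(self, block):
        self.buf[block] = True
        self.buf.move_to_end(block)
        if len(self.buf) > self.cap:
            self.buf.popitem(last=False)   # evict the LRU block

    def access(self, block):
        self.refs += 1
        if block in self.buf:
            self.hits += 1
        self._admit(block)        # the demand-fetched block
        self._admit(block + 1)    # the one-block-lookahead prefetch

cache = LruObl(capacity=8)
for b in list(range(10)) * 3:     # a repeated sequential scan
    cache.access(b)
print(f"hit ratio: {cache.hits / cache.refs:.2f}")
```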
An Automatic Prefetching And Caching System Steady improvements in storage capacities and CPU clock speeds intensify the performance bottleneck at the I/O subsystem of modern computers. Caching data can efficiently short circuit costly delays associated with disk accesses. Recent studies have shown that disk I/O performance gains provided by a cache buffer do not scale with cache size. Therefore, new algorithms have to be investigated to better utilize cache buffer space. Predictive prefetching and caching solutions have been shown to improve I/O performance in an efficient and scalable manner in simulation experiments. However, most predictive prefetching algorithms have not yet been implemented in real-world storage systems due to two main limitations: first, the existing prefetching solutions are unable to self-regulate based on changing I/O workload; second, an excessive number of unneeded blocks are prefetched. Combined, these drawbacks make predictive prefetching and caching a less attractive solution than simple LRU management. To address these problems, in this paper we propose an automatic prefetching and caching system (or APACS for short), which mitigates all of these shortcomings through three unique techniques, namely: (1) dynamic cache partitioning, (2) prefetch pipelining, and (3) prefetch buffer management. APACS dynamically partitions the buffer cache memory, used for prefetched and cached blocks, by automatically changing buffer/cache sizes in accordance with global I/O performance. The adaptive partitioning scheme implemented in APACS optimizes cache hit ratios, which subsequently accelerates application execution speeds. Experimental results obtained from trace-driven simulations show that APACS outperforms LRU cache management and existing prefetching algorithms by an average of over 50%.
The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms A fundamental challenge in improving file system performance is to design effective block replacement algorithms to minimize buffer cache misses. Despite the well-known interactions between prefetching and caching, almost all buffer cache replacement algorithms have been proposed and studied comparatively, without taking into account file system prefetching, which exists in all modern operating systems. This paper shows that such kernel prefetching can have a significant impact on the relative performance in terms of the number of actual disk I/Os of many well-known replacement algorithms; it can not only narrow the performance gap but also change the relative performance benefits of different algorithms. Moreover, since prefetching can increase the number of blocks clustered for each disk I/O and, hence, the time to complete the I/O, the reduction in the number of disk I/Os may not translate into proportional reduction in the total I/O time. These results demonstrate the importance of buffer caching research taking file system prefetching into consideration and comparing the actual disk I/Os and the execution time under different replacement algorithms.
Practical prefetching techniques for multiprocessor file systems Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of disk hardware. Parallel disk I/O subsystems have been proposed as one way to close the gap between processor and disk speeds. In a previous paper we showed that prefetching and caching have the potential to deliver the performance benefits of parallel file systems to parallel applications. In this paper we describe experiments with practical prefetching policies that base decisions only on on-line reference history, and that can be implemented efficiently. We also test the ability of those policies across a range of architectural parameters.
An optimality proof of the LRU-K page replacement algorithm This paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [O'Neil et al. 1993]. The algorithm is called the LRU-K method, and reduces to the well-known LRU (Least Recently Used) method for K = 1. Previous work [O'Neil et al. 1993; Weikum et al. 1994; Johnson and Shasha 1994] has shown the effectiveness for K > 1 by simulation, especially in the most common case of K = 2. The basic idea in LRU-K is to keep track of the times of the last K references to memory pages, and to use this statistical information to rank-order the pages as to their expected future behavior. Based on this the page replacement policy decision is made: which memory-resident page to replace when a newly accessed page must be read into memory. In the current paper, we prove, under the assumptions of the independent reference model, that LRU-K is optimal. Specifically we show: given the times of the (up to) K most recent references to each disk page, no other algorithm A making decisions to keep pages in a memory buffer holding n - 1 pages based on this information can improve on the expected number of I/Os to access pages over the LRU-K algorithm using a memory buffer holding n pages. The proof uses the Bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is actually made.
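To make the policy concrete, here is a compact LRU-K sketch (K = 2 by default): each page keeps its last K reference times, and the eviction victim is the resident page whose K-th most recent reference is oldest, with pages referenced fewer than K times treated as least valuable. Class and method names are our own, not from the paper.

```python
from collections import defaultdict

class LRUK:
    def __init__(self, capacity, K=2):
        self.cap, self.K = capacity, K
        self.hist = defaultdict(list)   # page -> up to K recent reference times
        self.resident = set()
        self.clock = 0

    def _backward_k(self, page):
        h = self.hist[page]
        return h[-self.K] if len(h) >= self.K else float("-inf")

    def access(self, page):
        """Return True on a buffer hit, False on a miss."""
        self.clock += 1
        self.hist[page] = (self.hist[page] + [self.clock])[-self.K:]
        if page in self.resident:
            return True
        if len(self.resident) >= self.cap:
            victim = min(self.resident, key=self._backward_k)
            self.resident.discard(victim)
        self.resident.add(page)
        return False

cache = LRUK(capacity=3)
print([cache.access(p) for p in [1, 2, 3, 1, 2, 4, 1]])
# [False, False, False, True, True, False, True] - page 3 is evicted for 4
```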
DULO: an effective buffer cache management scheme to exploit both temporal and spatial locality Sequentiality of requested blocks on disks, or their spatial locality, is critical to the performance of disks, where the throughput of accesses to sequentially placed disk blocks can be an order of magnitude higher than that of accesses to randomly placed blocks. Unfortunately, spatial locality of cached blocks is largely ignored and only temporal locality is considered in system buffer cache management. Thus, disk performance for workloads without dominant sequential accesses can be seriously degraded. To address this problem, we propose a scheme called DULO (DUal LOcality), which exploits both temporal and spatial locality in buffer cache management. Leveraging the filtering effect of the buffer cache, DULO can influence the I/O request stream by making the requests passed to disk more sequential, significantly increasing the effectiveness of I/O scheduling and prefetching for disk performance improvements. DULO has been extensively evaluated by both trace-driven simulations and a prototype implementation in Linux 2.6.11. In the simulations and system measurements, various application workloads have been tested, including Web Server, TPC benchmarks, and scientific programs. Our experiments show that DULO can significantly increase system throughput and reduce program execution times.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
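The method reduces to an eigenvalue problem on the doubly centered kernel matrix. A minimal NumPy sketch using a polynomial kernel in the spirit of the five-pixel-products example; the degree and the random data are arbitrary choices for illustration:

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=5):
    """Project rows of X onto the top nonlinear principal components."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # pick the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # projections of the data

X = np.random.default_rng(0).standard_normal((100, 3))
print(kernel_pca(X).shape)   # (100, 2)
```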
Conformant Graphplan Planning under uncertainty is a difficult task. If sensory information is available, it is possible to do contingency planning - that is, develop plans where certain branches are executed conditionally, based on the outcome of sensory actions. However, even without sensory information, it is often possible to develop useful plans that succeed no matter which of the allowed states the world is actually in. We refer to this type of planning as conformant planning. Few conformant planners have been built, partly because conformant planning requires the ability to reason about disjunction. In this paper we describe Conformant Graphplan (CGP), a Graphplan-based planner that develops sound (non-contingent) plans when faced with uncertainty in the initial conditions and in the outcome of actions. The basic idea is to develop separate plan graphs for each possible world. This requires some subtle changes to both the graph expansion and solution extraction phases of Graphplan. In particular, the solution extraction phase must consider the unexpected side effects of actions in other possible worlds, and must confront any undesirable effects. We show that CGP performs significantly better than two previous (probabilistic) conformant planners.
Representing Beliefs in the Fluent Calculus Action formalisms like the fluent calculus have been developed to endow logic-based agents with the abilities to reason about the effects of actions, to execute high-level strategies, and to plan. In this paper we extend the fluent calculus by a method for belief change, which allows agents to revise their internal model upon making observations that contradict this model. Unlike the existing combination of the situation calculus with belief revision [16], our formalism satisfies all of the standard postulates for (iterated) belief change. Furthermore, we have extended the high-level action programming language FLUX by a computational approach to belief change which is provably equivalent to the axiomatic characterization in the fluent calculus.
Towards Self-Tuning Memory Management for Data Servers Although today's computers provide huge amounts of main memory, the ever-increasing load of large data servers, imposed by resource-intensive decision-support queries and accesses to multimedia and other complex data, often leads to memory contention and may result in severe performance degradation. Therefore, careful tuning of memory management is crucial for heavy-load data servers. This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system. The common, fundamental elements in these methods include on-line load tracking, near-future access prediction based on stochastic models and the available on-line statistics, and dynamic and automatic adjustment of control parameters in a feedback loop.
Small cache, big effect: provable load balancing for randomly partitioned cluster services Load balancing requests across a cluster of back-end servers is critical for avoiding performance bottlenecks and meeting service-level objectives (SLOs) in large-scale cloud computing services. This paper shows how a small, fast popularity-based front-end cache can ensure load balancing for an important class of such services; furthermore, we prove an O(n log n) lower-bound on the necessary cache size and show that this size depends only on the total number of back-end nodes n, not the number of items stored in the system. We validate our analysis through simulation and empirical results running a key-value storage system on an 85-node cluster.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.071111
0.066667
0.034444
0.016
0.002029
0.000247
0.000022
0.000002
0
0
0
0
0
0
File Clustering Based Replication Algorithm in a Grid Environment Replication in grid file systems can significantly improve I/O performance of data-intensive applications. However, most existing replication techniques apply to individual files, which may introduce inefficient replication overheads for a large number of files. We propose a file clustering based replication algorithm for grid file systems. Our algorithm groups files according to a relationship of simultaneous accesses between files and stores replicas of the clustered files on storage nodes, so as to satisfy most of the expected future read accesses to the clustered files while minimizing replication times for individual files under the given storage capacity limitation. Our experiments on a grid environment of 20 nodes across 5 sites suggest that the proposed algorithm achieves accurate file clustering and efficient replica management; our clustering policy, with a file cluster size limit of 5120 MB and a storage capacity limit for replicas of 10240 MB, exhibits 1.58 times the efficiency of the policy that never groups related files. The results also indicate that the overheads required for introducing our algorithm significantly affect I/O performance of running applications.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
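The linear-algebra heart of the approach can be shown in miniature: rather than forming the information matrix A^T A, factor the (whitened) measurement Jacobian as A = QR and back-substitute, since R is exactly a square-root factor of the information matrix. Random matrices stand in for a real linearized SLAM problem, and a dedicated triangular solver would normally exploit R's structure:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 30))   # tall Jacobian: measurement rows, state columns
b = rng.standard_normal(120)         # stacked measurement residuals

Q, R = np.linalg.qr(A)               # R satisfies R^T R = A^T A
x = np.linalg.solve(R, Q.T @ b)      # back-substitution on the triangular factor

# Agrees with the ordinary least-squares solution of ||A x - b||
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```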
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Hypothetical Reasoning About Actions: From Situation Calculus To Event Calculus Hypothetical reasoning about actions is the activity of preevaluating the effect of performing actions in a changing domain; this reasoning underlies applications of knowledge representation, such as planning and explanation generation. Action effects are often specified in the language of situation calculus, introduced by McCarthy and Hayes in 1969. More recently, the event calculus has been defined to describe actual actions, i.e., those that have occurred in the past, and their effects on the domain. Although the two formalisms share the basic ontology of atomic actions and fluents, situation calculus cannot represent actual actions while event calculus cannot represent hypothetical actions. In this article, the language and the axioms of event calculus are extended to allow representing and reasoning about hypothetical actions, performed either at the present time or in the past, although counterfactuals are not supported. Both event calculus and its extension are defined as logic programs so that theories are readily adaptable for Prolog query interpretation. For a reasonably large class of theories and queries, Prolog interpretation is shown to be sound and complete w.r.t. the main semantics for logic programs.
A Realistic Experiment in Knowledge Representation in Open Event Calculus: Protocol Specification This paper presents one of the first realistic experiments in the use of Event Calculus in Open Logic Programming: the specification of a process protocol. The specification task involves most of the common complications of temporal reasoning: the representation of context-dependent actions, of preconditions and ramifications of actions, the modelling of system faults, and most of all, the representation of uncertainty of actions. As the underlying language, the Open Logic Programming...
E-RES: A System for Reasoning about Actions, Events and Observations E-RES is a system that implements the Language E, a logic for reasoning about narratives of action occurrences and observations. E's semantics is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. The system derives sceptical non-monotonic consequences of a given reformulated theory which exactly correspond to consequences entailed by E's model-theory. The computation relies on a complementary ability of the system to derive credulous non-monotonic consequences together with a set of supporting assumptions which is sufficient for the (credulous) conclusion to hold. E-RES allows theories to contain general action laws, statements about action occurrences, observations and statements of ramifications (or universal laws). It is able to derive consequences both forward and backward in time. This paper gives a short overview of the theoretical basis of E-RES and illustrates its use on a variety of examples. Currently, E-RES is being extended so that the system can be used for planning.
Efficient Temporal Reasoning In The Cached Event Calculus This article deals with the problem of providing Kowalski and Sergot's event calculus, extended with context dependency, with an efficient implementation in a logic programming framework. Despite a widespread recognition that a positive solution to efficiency issues is necessary to guarantee the computational feasibility of existing approaches to temporal reasoning, the problem of analyzing the complexity of temporal reasoning programs has been largely overlooked. This article provides a mathematical analysis of the efficiency of query and update processing in the event calculus and defines a cached version of the calculus that (i) moves computational complexity from query to update processing and (ii) features an absolute improvement of performance, because query processing in the event calculus costs much more than update processing in the proposed cached version.
Extending Conventional Planning Techniques to Handle Actions with Context-Dependent Effects
Occurrences and Narratives as Constraints in the Branching Structure of the Situation Calculus. The Situation Calculus is a logic of time and change in which there is a distinguished initial situation, and all other situations arise from the different sequences of actions that might be performed starting in the initial one. Within this framework, it is difficult to incorporate the notion of an occurrence, since all situations after the initial one are hypothetical. These occurrences are impo...
Computing Circumscription Revisited: A Reduction Algorithm In recent years, a great deal of attention has been devoted to logics of common-sense reasoning. Among the candidates proposed, circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic reasoning, but difficult to apply in practice. The major reason for this is the second-order nature of circumscription axioms and the difficulty in finding proper substitutions of predicate expressions for predicate variables. One solution to this problem is to compile, where possible, second-order formulas into equivalent first-order formulas. Although some progress has been made using this approach, the results are not as strong as one might desire and they are isolated in nature. In this article, we provide a general method that can be used in an algorithmic manner to reduce certain circumscription axioms to first-order formulas. The algorithm takes as input an arbitrary second-order formula and either returns as output an equivalent first-order formula, or terminates with failure. The class of second-order formulas, and analogously the class of circumscriptive theories that can be reduced, provably subsumes those covered by existing results. We demonstrate the generality of the algorithm using circumscriptive theories with mixed quantifiers (some involving Skolemization), variable constants, nonseparated formulas, and formulas with n-ary predicate variables. In addition, we analyze the strength of the algorithm, compare it with existing approaches, and provide formal subsumption results.
Representing actions in logic programming and its applications in database updates
The complexity of stochastic games We consider the complexity of stochastic games—simple games of chance played by two players. We show that the problem of deciding which player has the greatest chance of winning the game is in the class NP ∩ co-NP.
The Downward Refinement Property Using abstraction in planning does not guarantee an improvement in search efficiency; it is possible for an abstract planner to display worse performance than one that does not use abstraction. Analysis and experiments have shown that good abstraction hierarchies have, or are close to having, the downward refinement property, whereby, given that a concrete-level solution exists, every abstract solution can be refined to a concrete-level solution without backtracking across abstract levels. Working within a semantics for ABSTRIPS-style abstraction we provide a characterization of the downward refinement property. After discussing its effect on search efficiency, we develop a semantic condition sufficient for guaranteeing its presence in an abstraction hierarchy. Using the semantic condition, we then provide a set of sufficient and polynomial-time checkable syntactic conditions that can be used for checking a hierarchy for the downward refinement property.
Probabilistic reasoning about actions in nonmonotonic causal theories We present the language PC+ for probabilistic reasoning about actions, which is a generalization of the action language C+ that allows dealing with probabilistic as well as nondeterministic effects of actions. We define a formal semantics of PC+ in terms of probabilistic transitions between sets of states. Using a concept of a history and its belief state, we then show how several important problems in reasoning about actions can be concisely formulated in our formalism.
An algorithm for concurrency control and recovery in replicated distributed databases In a one-copy distributed database, each data item is stored at exactly one site. In a replicated database, some data items may be stored at multiple sites. The main motivation is improved reliability: by storing important data at multiple sites, the DBS can operate even though some sites have failed.This paper describes an algorithm for handling replicated data, which allows users to operate on data so long as one copy is “available.” A copy is “available” when (i) its site is up, and (ii) the copy is not out-of-date because of an earlier crash.The algorithm handles clean, detectable site failures, but not Byzantine failures or network partitions.
Database-aware semantically-smart storage Years of innovation in file systems have been highly successful in improving their performance and functionality, but at the cost of complicating their interaction with the disk. A variety of techniques exist to ensure consistency and integrity of file ...
Exploring Sequence Alignment Algorithms On FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment based on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes the current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
1.029911
0.029479
0.029479
0.021026
0.014429
0.008566
0.004904
0.001722
0.000121
0.000016
0
0
0
0
An Adaptive Cache Coherence Protocol Specification for Parallel Input/Output Systems Caching has been intensively used in memory and traditional file systems to improve system performance. However, the use of caching in parallel file systems and I/O libraries has been limited to I/O nodes to avoid cache coherence problems. In this paper, we specify an adaptive cache coherence protocol very suitable for parallel file systems and parallel I/O libraries. This model exploits the use of caching, both at processing and I/O nodes, providing performance increase mechanisms such as aggressive prefetching and delayed-write techniques. The cache coherence problem is solved by using a dynamic scheme of cache coherence protocols with different sizes and shapes of granularity. The proposed model is very appropriate for parallel I/O interfaces, such as MPI-IO. Performance results, obtained on an IBM SP2, are presented to demonstrate the advantages offered by the cache management methods proposed.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
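A minimal NumPy sketch of the idea: principal components in the feature space induced by a degree-3 polynomial kernel are read off from the eigendecomposition of the centered kernel matrix, without ever forming the feature map explicitly. The toy data, kernel degree, and number of retained components are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # toy data, assumed

K = (X @ X.T + 1.0) ** 3                          # degree-3 polynomial kernel
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one        # center in feature space

vals, vecs = np.linalg.eigh(Kc)                   # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]            # sort descending
# Normalize coefficients so eigenvectors have unit norm in feature space.
alphas = vecs[:, :2] / np.sqrt(np.maximum(vals[:2], 1e-12))
Z = Kc @ alphas                                   # first two nonlinear components
print(Z.shape)                                    # (100, 2)
```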
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
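The square-root idea can be seen on a toy linear problem: stack all measurement equations into a Jacobian A and residual vector b, then solve the normal equations by factoring the information matrix A^T A with a Cholesky decomposition and back-substituting. The 1-D world below (a prior, two odometry constraints, and a twice-observed landmark) is a made-up example, not one from the paper.

```python
import numpy as np

# States are [x0, x1, x2, l]; each row of A is one linear measurement.
A = np.array([
    [1.0,  0.0,  0.0, 0.0],   # prior: x0 = 0
    [-1.0, 1.0,  0.0, 0.0],   # odometry: x1 - x0 = 1
    [0.0, -1.0,  1.0, 0.0],   # odometry: x2 - x1 = 1
    [-1.0, 0.0,  0.0, 1.0],   # landmark from x0: l - x0 = 2.1
    [0.0,  0.0, -1.0, 1.0],   # landmark from x2: l - x2 = 0.05
])
b = np.array([0.0, 1.0, 1.0, 2.1, 0.05])

I = A.T @ A                      # information matrix
L = np.linalg.cholesky(I)        # square-root factor: I = L @ L.T
y = np.linalg.solve(L, A.T @ b)  # forward substitution
x = np.linalg.solve(L.T, y)      # back substitution
print(x)                         # least-squares estimate of [x0, x1, x2, l]
```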
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
On optimal degree selection for polynomial kernel with support vector machines: Theoretical and empirical investigations The key challenge in kernel based learning algorithms is the choice of an appropriate kernel and its optimal parameters. Selecting the optimal degree of a polynomial kernel is critical to ensure good generalisation of the resulting support vector machine model. In this paper we propose Bayesian and Laplace approximation methods to estimate the polynomial degree. A rule-based meta-learning approach is then proposed for automatic selection of the polynomial kernel and its optimal degree. The new approach is constructed and tested on 112 datasets of different sizes, covering binary as well as multi-class classification problems. An extensive computational evaluation of these methods is conducted, and rules are generated to determine when these approximation methods are appropriate.
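For contrast with the Bayesian and Laplace estimates proposed in the paper, the baseline such methods try to avoid is an exhaustive cross-validated search over candidate degrees. A hedged scikit-learn sketch follows; the dataset and degree range are arbitrary choices, not those of the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
for degree in range(1, 6):
    # One polynomial-kernel SVM per candidate degree, scored by 5-fold CV.
    model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=degree))
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"degree={degree}: accuracy={score:.3f}")
```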
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Exploring Gate-Limited Analytical Models for High Performance Network Storage Servers
Parameterized complexity for the database theorist
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Easy-to-use object-oriented parallel processing with Mentat Mentat, an object-oriented parallel processing system designed to directly address the difficulty of developing architecture-independent parallel programs, is discussed. The Mentat system consists of two components: the Mentat programming language and the Mentat runtime system. The Mentat programming language, which is based on C++, is described. Performance results from implementing the Mentat runtime system on a network of Sun 3 and 4 workstations, the Silicon Graphics Iris, the Intel iPSC/2, and the Intel iPSC/860 are presented.
Parallel objects migration: A fine grained approach to load distribution Migration is a fundamental mechanism for achieving load balancing and locality of references in parallel and distributed applications. This paper presents the migration mechanisms implemented in the Parallel Objects (PO) programming environment, which assumes a fine granularity in allocation and reallocation of objects. In fact, a PO object can dynamically distribute its components onto several nodes depending on its dynamic need for resources, and the migration mechanisms implemented in PO allow object components to migrate independently of each other. This paper describes how the PO environment can exploit the migration mechanisms via an embedded load-balancing policy, possibly driven by user-defined allocation hints, and evaluates the effectiveness of the approach in several application examples.
Object-oriented parallel processing with Mentat Mentat is an object-oriented parallel processing system designed to address three problems that face the parallel computing community: the difficulty of writing parallel programs, the difficulty of achieving portability of those programs, and the difficulty of exploiting contemporary heterogeneous environments. Writing parallel programs by hand is more difficult than writing sequential programs. The programmer must manage communication, synchronization, and scheduling of tens to thousands of independent processes. The burden of correctly managing the environment often overwhelms programmers, and requires a considerable investment of time and energy. If parallel computing is to become mainstream, it must be made easier for the average programmer. Otherwise, parallel computing will remain relegated to specialized applications of high value where the human investment required to parallelize the application can be justified. A second problem is that once a code has been implemented on a particular MIMD architecture, it is often not readily portable to other platforms; the tools, techniques, and library facilities used to parallelize the application may be specific to a particular platform. Thus, considerable effort must be re-invested to port the application to a new architecture. Given the plethora of new architectures and the rapid obsolescence of existing architectures, this represents a continuing time investment. One can view the different platforms as one dimension of a two-dimensional space, where the other dimension is time. One would like the implementation to be able to cover a large area in this space in order to amortize the development costs. Finally, there is heterogeneity. Today's high performance computation environments have a great deal of heterogeneity. Many users have a wide variety of resources available: traditional vector supercomputers, parallel supercomputers, and different high performance workstations. The machines may be connected together with a high speed local connection such as FDDI, ATM, or HIPPI, or they may be geographically distributed. Taken together these machines represent a tremendous aggregate computation resource. Mentat was originally designed to address the first two of these issues, implementation difficulty and portability. The primary Mentat design objectives were to provide 1) easy-to-use parallelism, 2) high performance via parallel execution, 3) system scalability from tens to hundreds of processors, and 4) applications portability across a wide range of platforms. The premise underlying Mentat is that writing parallel programs need not be more difficult than writing sequential programs. Instead, it is the lack of appropriate abstractions that has kept parallel architectures difficult to program, and hence, inaccessible to mainstream, production systems.
Multiprocessor file system interfaces Increasingly, file systems for multiprocessors are designed with parallel access to multiple disks, to keep I/O from becoming a serious bottleneck for parallel applications. Although file system software can transparently provide high-performance access to parallel disks, a new file system interface is needed to facilitate parallel access to a file from a parallel application. We describe the difficulties faced when using the conventional (Unix-like) interface in parallel applications, and then outline ways to extend the conventional interface to provide convenient access to the file for parallel programs, while retaining the traditional interface for programs that have no need for explicitly parallel file access. Our interface includes a single naming scheme, a multiopen operation, local and global file pointers, mapped file pointers, logical records, multi-files, and logical coercion for backward compatibility.
Scheduler activations: effective kernel support for the user-level management of parallelism Threads are the vehicle for concurrency in many approaches to parallel programming. Threads separate the notion of a sequential execution stream from the other aspects of traditional UNIX-like processes, such as address spaces and I/O descriptors. The objective of this separation is to make the expression and control of parallelism sufficiently cheap that the programmer or compiler can exploit even fine-grained parallelism with acceptable overhead. Threads can be supported either by the operating system kernel or by user-level library code in the application address space, but neither approach has been fully satisfactory. This paper addresses this dilemma. First, we argue that the performance of kernel threads is inherently worse than that of user-level threads, rather than this being an artifact of existing implementations; we thus argue that managing parallelism at the user level is essential to high-performance parallel computing. Next, we argue that the lack of system integration exhibited by user-level threads is a consequence of the lack of kernel support for user-level threads provided by contemporary multiprocessor operating systems; we thus argue that kernel threads or processes, as currently conceived, are the wrong abstraction on which to support user-level management of parallelism. Finally, we describe the design, implementation, and performance of a new kernel interface and user-level thread package that together provide the same functionality as kernel threads without compromising the performance and flexibility advantages of user-level management of parallelism.
A Stable Distributed Scheduling Algorithm
Towards Self-Tuning Memory Management for Data Servers Although today's computers provide huge amounts of main memory, the ever-increasing load of large data servers, imposed by resource-intensive decision-support queries and accesses to multimedia and other complex data, often leads to memory contention and may result in severe performance degradation. Therefore, careful tuning of memory management is crucial for heavy-load data servers. This paper gives an overview of self-tuning methods for a spectrum of memory management issues, ranging from traditional caching to exploiting distributed memory in a server cluster and speculative prefetching in a Web-based system. The common, fundamental elements in these methods include on-line load tracking, near-future access prediction based on stochastic models and the available on-line statistics, and dynamic and automatic adjustment of control parameters in a feedback loop.
Lightweight recoverable virtual memory Recoverable virtual memory refers to regions of a virtual address space on which transactional guarantees are offered. This article describes RVM, an efficient, portable, and easily used implementation of recoverable virtual memory for Unix environments. A unique characteristic of RVM is that it allows independent control over the transactional properties of atomicity, permanence, and serializability. This leads to considerable flexibility in the use of RVM, potentially enlarging the range of applications that can benefit from transactions. It also simplifies the layering of functionality such as nesting and distribution. The article shows that RVM performs well over its intended range of usage even though it does not benefit from specialized operating system support. It also demonstrates the importance of intra- and inter-transaction optimizations.
Analysis of disk arm movement for large sequential reads The common model for analyzing seek distances on a magnetic disk uses a continuous approximation in which the range of motion of the disk arm is the interval [0,1]. In this model, both the current location of the disk arm and the location of the next request are assumed to be points uniformly distributed on the interval [0,1] and therefore the expected seek distance to service the next request is 1/3. In many types of databases including scientific, object oriented, and multimedia database systems, a disk service request may involve fetching very large objects which must be transferred from the disk without interruption. In this paper we show that the common model does not accurately reflect disk arm movement in such cases as both the assumption of uniformity and the range of motion of the disk arm may depend on the size of the objects. We propose a more accurate model that takes into consideration the distribution of the sizes of the objects fetched as well as the disk arm scheduling policy. We provide closed form expressions for the expected seek distance in this model under various assumptions on the distribution of object sizes and the capability of the disk arm to read in both directions and to correct its position before the next read is performed.
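The classical baseline the abstract revises, an expected seek distance of 1/3, is just E|X - Y| = 1/3 for independent X, Y uniform on [0, 1], which a quick Monte Carlo run confirms:

```python
import random

# Arm position and request location both uniform on [0, 1]: E|X - Y| = 1/3.
random.seed(1)
n = 1_000_000
total = sum(abs(random.random() - random.random()) for _ in range(n))
print(total / n)   # approximately 0.333
```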
The NAS Parallel Benchmarks. A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers. These consist of five "parallel kernel" benchmarks and three "simulated application" benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their "pencil and paper" specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
MC2: Multiple Clients on a Multilevel Cache In today's networked storage environment, it is common to have a hierarchy of caches where the lower levels of the hierarchy are accessed by multiple clients. This sharing can have both positive or negative effects. While data fetched by one client can be used by another client without incurring additional delays, clients competing for cache buffers can evict each other's blocks and interfere with exclusive caching schemes. Our algorithm, MC2, combines local, per client management with a global, system-wide, scheme, to emphasize the positive effects of sharing and reduce the negative ones. The local scheme uses readily available information about the client's future access profile to save the most valuable blocks, and to choose the best replacement policy for them. The global scheme uses the same information to divide the shared cache space between clients, and to manage this space. Exclusive caching is maintained for non-shared data and is disabled when sharing is identified. Our simulation results show that the combined algorithm significantly reduces the overall I/O response times of the system.
A Response Time Distribution Model for Zoned RAID RAID systems are widely deployed, both as standalone storage solutions and as the building blocks of modern virtualised storage platforms. An accurate model of RAID system performance is therefore critical to understanding storage system performance. To this end, this paper presents a queueing network-based model of RAID systems comprised of zoned disks and operating at RAID level 0-1 or 5. The contribution over previous work is twofold. Firstly, our analysis approximates full I/O request response time distributions rather than just mean values. This provides the ability to reason about response time quantiles and higher moments of response time --- both of which are useful in the context of modern quality of service requirements. Secondly, we validate our model against measurements from a real RAID system rather than a software simulation. The close agreement between predicted and observed response time distributions gives a high level of confidence in the validity of our model.
Connections between the complexity of unique satisfiability and the threshold behavior of randomized reductions The present research is motivated by new results on the complexity of the unique satisfiability problem (USAT). Some new results are obtained, using the concept of randomized reductions. The proofs use only the fact that USAT is complete for DP under randomized reductions, even though the probability bound of these reductions may be low. Furthermore, the results show that the structural complexities of USAT and DP many-one complete sets are very similar, lending support to the argument that even sets complete under `weak' randomized reductions can capture the properties of the many-one complete sets. The authors generalize these results for the Boolean hierarchy and give upper and lower bounds on the thresholds for these classes
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
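The arithmetic behind such arrays is plain XOR parity. The sketch below lays out an n x n grid of byte-valued data blocks with row and column parities plus n mirrored parity elements, then recovers one lost block. Treating the mirrored half as the row parities, and the random block contents, are assumptions made for illustration.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(n, n), dtype=np.uint8)  # n^2 data blocks

row_parity = np.bitwise_xor.reduce(data, axis=1)   # n row parities
col_parity = np.bitwise_xor.reduce(data, axis=0)   # n column parities
mirror = row_parity.copy()                         # n extra mirrored parities

# Recover a lost block (i, j) from its row parity and the surviving row data.
i, j = 1, 2
survivors = np.bitwise_xor.reduce(np.delete(data[i], j))
print(survivors ^ row_parity[i] == data[i, j])     # True
```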
Scores: 1.036064, 0.038095, 0.032443, 0.007143, 0.003599, 0.000354, 0.000112, 0.000048, 0.000013, 0.000001, 0, 0, 0, 0
A Goal-Oriented Approach to Computing Well Founded Semantics
A goal-oriented approach to computing the well-founded semantics Global SLS resolution is an ideal procedural semantics for the well-founded semantics. We present a more effective variant of global SLS resolution, called XOLDTNF resolution, which incorporates simple mechanisms for loop detection and handling. Termination is guaranteed for all programs with the bounded-term-size property. We establish the soundness and (search space) completeness of XOLDTNF resolution. An implementation of XOLDTNF resolution in Prolog is available via FTP.
A Deductive Database Approach to Planning in Uncertain Environments We present a formal model for reasoning about probabilistic information in STRIPS style planning. We then show that all probabilistic planning problems expressible in this model may be represented as equivalent probabilistic logic programs, yielding a sound and complete method for finding such plans.
Logic Programming and Reasoning with Incomplete Information The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded by new modal operators K and M (where, for the set of rules T and formula F, KF stands for "F is known to be true by a reasoner with a set of premises T" and MF ...
Miracles in formal theories of action Most work on reasoning about action is based on the implicit assumption that there are no events happening in the world concurrently with the actions that are being carried out. We discuss the possibility of relaxing this assumption and treating it as a default principle—if it is inconsistent with the given facts, then we will admit the possibility of unknown events, “miracles,” that, along with the given actions, contribute to the properties of the new situation. The formalism proposed by one of the authors in the paper, Formal Theories of Action, does not treat “miracles” properly. We discuss a modification of that approach which corrects this problem.
The anomalous extension problem in default reasoning In their recent celebrated paper, Hanks and McDermott presented a simple problem in temporal reasoning which showed that a seemingly natural representation of a frame axiom in nonmonotonic logic can give rise to an anomalous extension, i.e., one which is counter-intuitive in that it does not appear to be supported by the known facts.
Applications of circumscription to formalizing common-sense knowledge We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common-sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are nonmonotonic treatments of is-a hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored.
Nested abnormality theories We propose a new approach to the use of circumscription for representing knowledge. Nested abnormality theories are similar to simple abnormality theories introduced by McCarthy, except that their axioms may have a nested structure, with each level corresponding to another application of the circumscription operator. The new style of applying circumscription sometimes leads to more economical and elegant formalizations. Mathematical properties of nested abnormality theories may be easier...
Expressing default abduction problems as quantified Boolean formulas Abduction is the process of finding explanations for observed phenomena in accord to known laws about a given application domain. This form of reasoning is an important principle of common-sense reasoning and is particularly relevant in conjunction with nonmonotonic knowledge representation formalisms. In this paper, we deal with a model for abduction in which the domain knowledge is represented in terms of a default theory. We show how the main reasoning tasks associated with this particular form of abduction can be axiomatised within the language of quantified Boolean logic. More specifically, we provide polynomial-time constructible reductions mapping a given abduction problem into a quantified Boolean formula (QBF) such that the satisfying truth assignments to the free variables of the latter determine the solutions of the original problem. Since there are now efficient QBF-solvers available, this reduction technique yields a straightforward method to implement the discussed abduction tasks. We describe a realisation of this approach by appeal to the reasoning system QUIP.
Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of the action language AL. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
The complexity of evaluating relational queries We prove a sequence of results which characterize exactly the complexity of problems related to the evaluation of relational queries consisting of projections and natural joins. We show that testing whether the result of a given query on a given relation equals some other given relation is D^p-complete (D^p is a class which includes both NP and co-NP, and was recently introduced in a totally different context [13]). We show that testing inclusion or equivalence of queries with respect to a fixed relation (or of relations with respect to a fixed query) is Π_2^p-complete. We also examine the complexity of estimating the number of tuples of the answer.
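The project-join queries analyzed here are easy to pin down operationally. A toy evaluator in Python, with relations represented as lists of dicts (the relations and attribute names are made up):

```python
def natural_join(r, s):
    """Join tuples that agree on all shared attributes."""
    common = set(r[0]) & set(s[0]) if r and s else set()
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in common)]

def project(r, attrs):
    """Project onto attrs, removing duplicate tuples."""
    out = {tuple(t[a] for a in attrs) for t in r}
    return [dict(zip(attrs, row)) for row in out]

emp = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 2}]
dept = [{"dept": 1, "floor": 3}, {"dept": 2, "floor": 5}]
print(project(natural_join(emp, dept), ["name", "floor"]))
```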
Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008, Napa Valley, California, USA, October 26-30, 2008
On Complexity of Counting Let l and u be density bound functions with l ≤ u. We give a full characterization of intervals [l, u] such that a polynomial-time ATM with a constant number of alternations can verify the number of words of a given length in a given (as its oracle) set A, provided that A's density function is in [l, u]. We also prove a new lower bound on approximate counting: there is a recursive set A whose elements cannot be approximately counted at the second level of the polynomial-time hierarchy relative to A.
Unsupervised (Parameter) Learning for MRFs on Bipartite Graphs We consider unsupervised (parameter) learning for general Markov random fields on bipartite graphs. This model class includes Restricted Boltzmann Machines. We show that besides the widely used stochastic gradient approximation (a.k.a. Persistent Contrastive Divergence) there is an alternative learning approach: a modified EM algorithm which is tractable because of the bipartiteness of the model graph. We compare the resulting double-loop algorithm and PCD learning experimentally and show that the former converges faster and more stably than the latter.
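For orientation, the stochastic-gradient baseline referred to above (PCD) maintains persistent fantasy particles whose Gibbs chain supplies the negative phase of the likelihood gradient. Below is a minimal, bias-free Bernoulli RBM sketch of that baseline; sizes, learning rate, and data are arbitrary assumptions, and this is not the paper's EM variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 6, 3, 0.05
W = 0.01 * rng.normal(size=(n_vis, n_hid))
data = rng.integers(0, 2, size=(20, n_vis)).astype(float)
fantasy = rng.integers(0, 2, size=(20, n_vis)).astype(float)  # persistent chain

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(100):
    # Positive phase: expected hidden units given the training data.
    h_data = sigmoid(data @ W)
    # Negative phase: advance the persistent Gibbs chain by one full step.
    h_fant = (rng.random((20, n_hid)) < sigmoid(fantasy @ W)).astype(float)
    fantasy = (rng.random((20, n_vis)) < sigmoid(h_fant @ W.T)).astype(float)
    h_fant_prob = sigmoid(fantasy @ W)
    # Gradient estimate: data correlations minus model (chain) correlations.
    W += lr * ((data.T @ h_data) - (fantasy.T @ h_fant_prob)) / len(data)
```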
Scores: 1.042097, 0.024158, 0.021443, 0.010147, 0.008063, 0.003437, 0.000723, 0.000081, 0.000023, 0.000008, 0, 0, 0, 0
Analysis of simple randomized buffer management for parallel I/O Buffer management for a D-disk parallel I/O system is considered in the context of randomized placement of data on the disks. A simple prefetching and caching algorithm PHASE-LRU using bounded lookahead is described and analyzed. It is shown that PHASE-LRU performs an expected number of I/Os that is within a factor Θ(log D / log log D) of the number performed by an optimal off-line algorithm. In contrast, any deterministic buffer management algorithm with the same amount of lookahead must do at least Ω(√D) times the number of I/Os of the optimal.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can yield up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have studied incident classification algorithms, few have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Since the feature learning is unsupervised, this approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Solving graph data issues using a layered architecture approach with applications to web spam detection. This paper proposes the combination of two state-of-the-art algorithms for processing graph input data, viz., the probabilistic mapping graph self-organizing map, an unsupervised learning approach, and the graph neural network, a supervised learning approach. We organize these two algorithms in a cascade architecture containing a probabilistic mapping graph self-organizing map and a graph neural network. We show that this combined approach helps us to limit the long-term dependency problem that exists when training the graph neural network, resulting in an overall improvement in performance. This is demonstrated in an application to a benchmark problem requiring the detection of spam in a relatively large set of web sites. It is found that the proposed method produces results which reach the state of the art when compared with some of the best results obtained by others using quite different approaches. A particular strength of our method is its applicability to any input domain which can be represented as a graph.
Why Does Unsupervised Pre-training Help Deep Learning? Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this question is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.
Learning Deep Architectures for AI Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Extended stable semantics for normal and disjunctive programs
The nature of statistical learning theory.
A machine program for theorem-proving The programming of a proof procedure is discussed in connection with trial runs and possible improvements.
An Introduction to Least Commitment Planning Recent developments have clarified the process of generating partially ordered, partially specified sequences of actions whose execution will achieve an agent's goal. This article summarizes a progression of least commitment planners, starting with one that handles the simple STRIPS representation and ending with UCPOP, a planner that manages actions with disjunctive preconditions, conditional effects, and universal quantification over dynamic universes. Along the way, I explain how Chapman's formulation of the modal truth criterion is misleading and why his NP-completeness result for reasoning about plans with conditional effects does not apply to UCPOP.
Equilibria and steering laws for planar formations This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using the planar Frenet–Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G=SE(2) is a symmetry group for the control law), and a global convergence result for the two-vehicle control law is proved. An n-vehicle generalization of the two-vehicle control law is also presented, and the corresponding (relative) equilibria for the n-vehicle problem are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem.
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
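The stochastic search in question is Walksat-style local search over a propositional encoding of the planning problem. A minimal sketch follows, with an illustrative noise parameter and clause format rather than the authors' exact implementation.

```python
# Walksat-style stochastic local search over a CNF encoding (illustrative
# clause format and noise parameter p). A clause is a list of nonzero
# ints: v for a positive literal, -v for a negative one.
import random

def walksat(clauses, n_vars, max_flips=100000, p=0.5, seed=0):
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def satisfied(cl):
        return any(assign[abs(l)] == (l > 0) for l in cl)

    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not satisfied(cl)]
        if not unsat:
            return assign[1:]
        cl = rng.choice(unsat)
        if rng.random() < p:                 # noise step: flip a random var
            v = abs(rng.choice(cl))
        else:                                # greedy step: minimize broken clauses
            def broken_after_flip(v):
                assign[v] = not assign[v]
                n = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return n
            v = min((abs(l) for l in cl), key=broken_after_flip)
        assign[v] = not assign[v]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], 3))
```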
Simultaneous Localization And Mapping With Sparse Extended Information Filters In this paper we describe a scalable algorithm for the simultaneous localization and mapping (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast majority of SLAM algorithms are based on the extended Kalman filter (EKF). In this paper we advocate an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the EIF called the sparse extended information filter (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between pairs of nearby features, as well as information about the robot's pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
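The information form mentioned here replaces the covariance/mean pair with an information matrix Omega and information vector xi, and a measurement then becomes an additive update. A minimal linear-Gaussian sketch in numpy (H, R, z are toy values; in SEIFs H is sparse, which is what makes the update cheap):

```python
# Measurement update in information form, as used by (sparse) extended
# information filters: a minimal linear-Gaussian numpy sketch with toy
# values. For a nonlinear measurement h, z below would be replaced by
# the innovation term z - h(mu) + H @ mu after linearization.
import numpy as np

def eif_measurement_update(Omega, xi, H, R, z):
    """Omega += H^T R^-1 H,  xi += H^T R^-1 z  (information matrix/vector)."""
    Rinv = np.linalg.inv(R)
    return Omega + H.T @ Rinv @ H, xi + H.T @ Rinv @ z

Omega, xi = np.eye(2), np.zeros(2)          # prior in information form
H = np.array([[1.0, 0.0]])                  # observe the first state only
R = np.array([[0.5]])
z = np.array([2.0])
Omega, xi = eif_measurement_update(Omega, xi, H, R, z)
print(np.linalg.solve(Omega, xi))           # recover the mean when needed
```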
A logic programming approach to knowledge-state planning: Semantics and complexity We propose a new declarative planning language, called K, which is based on principles and methods of logic programming. In this language, transitions between states of knowledge can be described, rather than transitions between completely described states of the world, which makes the language well suited for planning under incomplete knowledge. Furthermore, our formalism enables the use of default principles in the planning process by supporting negation as failure. Nonetheless, K also supports the representation of transitions between states of the world (i.e., states of complete knowledge) as a special case, which shows that the language is very flexible. As we demonstrate on particular examples, the use of knowledge states may allow for a natural and compact problem representation. We then provide a thorough analysis of the computational complexity of K, and consider different planning problems, including standard planning and secure planning (also known as conformant planning) problems. We show that these problems have different complexities under various restrictions, ranging from NP to NEXPTIME in the propositional case. Our results form the theoretical basis for the DLVk system, which implements the language K on top of the DLV logic programming system.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating into our design the option of adding extra redundancy once we find that the array's disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent even at catastrophic failure levels where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.001274
0.000098
0
0
0
0
0
0
0
0
0
0
0
The Complexity of Reasoning for Fragments of Autoepistemic Logic. Autoepistemic logic extends propositional logic by the modal operator L. A formula φ that is preceded by an L is said to be "believed." The logic was introduced by Moore in 1985 for modeling an ideally rational agent's behavior and reasoning about his own beliefs. In this article we analyze all Boolean fragments of autoepistemic logic with respect to the computational complexity of the three most common decision problems: expansion existence, brave reasoning and cautious reasoning. As a second contribution we classify the computational complexity of checking that a given set of formulae characterizes a stable expansion and that of counting the number of stable expansions of a given knowledge base. We improve the best known $\Delta_2^p$ upper bound on the former problem to completeness for the second level of the Boolean hierarchy. To the best of our knowledge, this is the first paper analyzing the counting problem for autoepistemic logic.
Bounded Nondeterminism and Alternation in Parameterized Complexity Theory We give machine characterisations and logical descriptions of a number of parameterized complexity classes. The focus of our attention is the class W[P], which we characterise as the class of all parameterized problems decidable by a nondeterministic fixed-parameter tractable algorithm whose use of nondeterminism is bounded in terms of the parameter. We give similar characterisations for AW[P], the "alternating version of W[P]", and various other parameterized complexity classes. We also give logical characterisations of the classes W[P] and AW[P] in terms of fragments of least fixed-point logic, thereby putting these two classes into a uniform framework that we have developed in earlier work. Furthermore, we investigate the relation between alternation and space in parameterized complexity theory. In this context, we prove that the COMPACT TURING MACHINE COMPUTATION problem, shown to be hard for the class AW[SAT] in [1], is complete for the class uniform-XNL.
Fixed-Parameter Algorithms For Artificial Intelligence, Constraint Satisfaction and Database Problems We survey the parameterized complexity of problems that arise in artificial intelligence, database theory and automated reasoning. In particular, we consider various parameterizations of the constraint satisfaction problem, the evaluation problem of Boolean conjunctive database queries and the propositional satisfiability problem. Furthermore, we survey parameterized algorithms for problems arising in the context of the stable model semantics of logic programs, for a number of other problems of non-monotonic reasoning, and for the computation of cores in data exchange.
Logic programs with stable model semantics as a constraint programming paradigm Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable-free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built-in integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
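A useful way to internalize this semantics is the Gelfond-Lifschitz reduct: M is a stable model iff M is the least model of the program obtained by deleting rules whose negative body intersects M and dropping the remaining negative literals. A brute-force guess-and-check sketch for tiny ground programs (the rule format is a convention of this sketch):

```python
# Brute-force stable models of a ground normal program via the
# Gelfond-Lifschitz reduct. A rule is (head, pos_body, neg_body) with
# the bodies given as sets of atoms; this format is our own convention.
from itertools import chain, combinations

def least_model(definite_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    M, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= M and head not in M:
                M.add(head)
                changed = True
    return M

def stable_models(rules):
    atoms = {r[0] for r in rules} | {a for r in rules for a in r[1] | r[2]}
    subsets = chain.from_iterable(
        combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
    for candidate in subsets:
        M = set(candidate)
        reduct = [(h, p) for h, p, n in rules if not (n & M)]  # GL reduct
        if least_model(reduct) == M:
            yield M

# p :- not q.   q :- not p.   -- two stable models, {p} and {q}
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(list(stable_models(prog)))
```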
Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.
Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays A technique for partitioning and mapping algorithms into VLSI systolic arrays is presented in this paper. Algorithm partitioning is essential when the size of a computational problem is larger than the size of the VLSI array intended for that problem. Computational models are introduced for systolic arrays and iterative algorithms. First, we discuss the mapping of algorithms into arbitrarily large size VLSI arrays. This mapping is based on the idea of algorithm transformations. Then, we present an approach to algorithm partitioning which is also based on algorithm transformations. Our approach to the partitioning problem is to divide the algorithm index set into bands and to map these bands into the processor space. The partitioning and mapping technique developed throughout the paper is summarized as a six step procedure. A computer program implementing this procedure was developed and some results obtained with this program are presented.
Indexing By Latent Semantic Analysis
Disk Shadowing Disk shadowing is a technique for maintaining a set of two or more identical disk images on separate disk devices. Its primary purpose is to enhance reliability and availability of secondary storage by providing multiple paths to redundant data. However, shadowing can also boost I/O performance. In this paper, we contend that intelligent device scheduling of shadowed disks increases the I/O rate by allowing parallel reads and by substantially reducing the average seek time for random reads. In particular, we develop an analytic model which shows that the seek time for a random read in a shadow set is a monotonic decreasing function of the number of disks.
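The monotonic-decrease claim is easy to reproduce numerically under a simplified model: the disk arms and the target sit at independent uniform positions on [0, 1], and the read is served by the nearest arm. A quick Monte Carlo check (the paper's analytic model is more detailed):

```python
# Monte Carlo check that the expected seek distance for a random read
# decreases as the shadow set grows: each arm is at an independent
# uniform position, and we read from whichever arm is nearest the target.
import random

def expected_min_seek(n_disks, trials=100000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        target = rng.random()
        total += min(abs(rng.random() - target) for _ in range(n_disks))
    return total / trials

for n in (1, 2, 4, 8):
    print(n, round(expected_min_seek(n), 4))   # strictly shrinking values
```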
Downward Separation Fails Catastrophically for Limited Nondeterminism Classes The $\beta$ hierarchy consists of classes $\beta_k = {\rm NP}[\log^k n] \subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects ${\rm P} = \beta_1 \subseteq \beta_2 \subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under computation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} \neq \beta_{k+2} \neq \cdots \] and another oracle relative to which \[ {\rm P} = \beta_{k} \neq \beta_{k+1} = {\rm PSPACE}. \] We also construct an oracle relative to which $\beta_{2k} = \beta_{2k+1} \neq \beta_{2k+2}$ for all $k$.
Diagnostic reasoning with A-Prolog In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
iSAM: Incremental Smoothing and Mapping In this paper, we present incremental smoothing and mapping (iSAM), which is a novel approach to the simultaneous localization and mapping problem that is based on fast incremental matrix factorization. iSAM provides an efficient and exact solution by updating a QR factorization of the naturally sparse smoothing information matrix, thereby recalculating only those matrix entries that actually change. iSAM is efficient even for robot trajectories with many loops as it avoids unnecessary fill-in in the factor matrix by periodic variable reordering. Also, to enable data association in real time, we provide efficient algorithms to access the estimation uncertainties of interest based on the factored information matrix. We systematically evaluate the different components of iSAM as well as the overall algorithm using various simulated and real-world datasets for both landmark and pose-only settings.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating into our design the option of adding extra redundancy once we find that the array's disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent even at catastrophic failure levels where the array would lose up to a quarter of its storage capacity in a year.
1.2
0.066667
0.02
0.003774
0
0
0
0
0
0
0
0
0
0
Enhanced Error Correction Algorithm for RBF Neural Networks.
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map, for instance the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
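In matrix terms the method reduces to an eigenvalue problem on the centered kernel matrix. A compact numpy sketch, using an RBF kernel as an illustrative choice (the paper's experiments use polynomial kernels):

```python
# Kernel PCA in a few lines of numpy: build the kernel matrix, double-center
# it, eigendecompose, and project. The RBF kernel and gamma are illustrative
# choices for this sketch.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # centering in feature space
    w, V = np.linalg.eigh(Kc)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))  # normalize
    return Kc @ alphas                        # projections of the training points

X = np.random.default_rng(0).normal(size=(50, 5))
print(kernel_pca(X).shape)                    # (50, 2)
```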
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
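The heart of the approach is that the smoothing problem is a sparse least-squares system, and QR factorization of the measurement Jacobian yields the square-root information matrix R directly. A dense toy version in numpy (real SLAM Jacobians are sparse, and column ordering keeps the factor sparse as well):

```python
# Toy version of square root information smoothing: QR-factorize the
# linearized measurement Jacobian A and back-substitute for the update.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))           # stacked, linearized measurements
b = rng.normal(size=40)                 # residual vector
Q, R = np.linalg.qr(A)                  # R is the square-root information matrix
dx = np.linalg.solve(R, Q.T @ b)        # back-substitution gives the update
print(np.allclose(A.T @ A @ dx, A.T @ b))   # agrees with the normal equations
```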
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers, since the feature learning is unsupervised.
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Beyond NP: Arc-Consistency for Quantified Constraints The generalization of the satisfiability problem with arbitrary quantifiers is a challenging problem of both theoretical and practical relevance. Being PSPACE-complete, it provides a canonical model for solving other PSPACE tasks which naturally arise in AI. Effective SAT-based solvers have been designed very recently for the special case of boolean constraints. We propose to consider the more general problem where constraints are arbitrary relations over finite domains. Adopting the viewpoint of constraint-propagation techniques so successful for CSPs, we provide a theoretical study of this problem. Our main result is to propose quantified arc-consistency as a natural extension of the classical CSP notion.
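To make the quantified-constraint semantics concrete, here is a naive recursive evaluator over finite domains: it is exponential and propagation-free, i.e. exactly the baseline that quantified arc-consistency aims to improve on. The ('E'/'A', var) prefix and the callable matrix are conventions of this sketch.

```python
# Naive recursive evaluation of a quantified constraint problem over
# finite domains. Prefix entries are ('E', var) or ('A', var); the matrix
# is a predicate over a complete assignment.
def evaluate(prefix, domains, matrix, assignment=None):
    assignment = assignment or {}
    if not prefix:
        return matrix(assignment)
    q, v = prefix[0]
    branches = (evaluate(prefix[1:], domains, matrix, {**assignment, v: d})
                for d in domains[v])
    return any(branches) if q == 'E' else all(branches)

# forall x exists y : x + y == 3, over domains {0,...,3} -- true (y = 3 - x)
doms = {'x': range(4), 'y': range(4)}
print(evaluate([('A', 'x'), ('E', 'y')], doms, lambda a: a['x'] + a['y'] == 3))
```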
Algorithms for Quantified Constraint Satisfaction Problems Many propagation and search algorithms have been developed for constraint satisfaction problems (CSPs). In a standard CSP all variables are existentially quantified. The CSP formalism can be extended to allow universally quantified variables, in which case the complexity of the basic reasoning tasks rises from NP-complete to PSPACE-complete. Such problems have, so far, been studied mainly in the context of quantified Boolean formulae. Little work has been done on problems with discrete non-Boolean domains. We attempt to fill this gap by extending propagation and search algorithms from standard CSPs to the quantified case. We also show how the notion of value interchangeability can be exploited to break symmetries and speed up search by orders of magnitude. Finally, we test experimentally the algorithms and methods proposed.
Propagating logical combinations of constraints Many constraint toolkits provide logical connectives like disjunction, negation and implication. These permit complex constraint expressions to be built from primitive constraints. However, the propagation of such complex constraint expressions is typically limited. We therefore present a simple and lightweight method for propagating complex constraint expressions. We provide a precise characterization of when this method enforces generalized arc-consistency. In addition, we demonstrate that with our method many different global constraints can be easily implemented.
Asymptotically optimal encodings of conformant planning in QBF The world is unpredictable, and acting intelligently requires anticipating possible consequences of actions that are taken. Assuming that the actions and the world are deterministic, planning can be represented in the classical propositional logic. Introducing nondeterminism (but not probabilities) or several initial states increases the complexity of the planning problem and requires the use of quantified Boolean formulae (QBF). The currently leading logic-based approaches to conditional planning use explicitly or implicitly a QBF with the prefix ∃∀∃. We present formalizations of the planning problem as QBF which have an asymptotically optimal linear size and the optimal number of quantifier alternations in the prefix: ∃∀ and ∀∃. This is in accordance with the fact that the planning problem (under the restriction to polynomial size plans) is on the second level of the polynomial hierarchy, not on the third.
An Effective Algorithm for the Futile Questioning Problem In the futile questioning problem, one must decide whether acquisition of additional information can possibly lead to the proof of a conclusion. Solution of that problem demands evaluation of a quantified Boolean formula at the second level of the polynomial hierarchy. The same evaluation problem, called Q-ALL SAT, arises in many other applications. In this paper, we introduce a special subclass of Q-ALL SAT that is at the first level of the polynomial hierarchy. We develop a solution algorithm for the general case that uses a backtracking search and a new form of learning of clauses. Results are reported for two sets of instances involving a robot route problem and a game problem. For these instances, the algorithm is substantially faster than state-of-the-art solvers for quantified Boolean formulas.
Polynomial-Length Planning Spans the Polynomial Hierarchy This paper presents a family of results on the computational complexity of planning: classical, conformant, and conditional with full or partial observability. Attention is restricted to plans of polynomially-bounded length. For conditional planning, restriction to plans of polynomial size is also considered. For this analysis, a planning domain is described by a transition relation encoded in classical propositional logic. Given the widespread use of satisfiability-based planning methods, this is a rather natural choice. Moreover, this allows us to develop a unified representation--in second-order propositional logic--of the range of planning problems considered. By describing a wide range of results within a single framework, the paper sheds new light on how planning complexity is affected by common assumptions such as nonconcurrency, determinism and polynomial-time decidability of executability of actions.
Pushing the envelope: planning, propositional logic, and stochastic search Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
QBF Reasoning on Real-World Instances During the recent years, the development of tools for deciding Quantified Boolean Formulas (QBFs) has been accompanied by a steady supply of real-world instances, i.e., QBFs originated by translations from application domains. Instances of this kind proved to be challenging for current state-of-the-art QBF solvers, while the ability to deal effectively with them is necessary to foster adoption of QBF-based reasoning in practice. In this paper we describe three reasoning techniques that we implemented in our solver QUBE++ to increase its performances on real-world instances coming from formal verification and planning domains. We present experimental results that witness the contribution of each technique and the better performances of QUBE++ with respect to other state-of-the-art QBF solvers. The effectiveness of QUBE++ is further confirmed by experiments run on challenging real-world SAT instances, where QUBE++ turns out to be competitive with respect to current state-of-the-art SAT solvers.
Finding a Shortest Solution for the N × N Extension of the 15-PUZZLE Is Intractable
Where "Ignoring delete lists" works: local search topology in planning benchmarks Between 1998 and 2004, the planning community has seen vast progress in terms of the sizes of benchmark examples that domain-independent planners can tackle successfully. The key technique behind this progress is the use of heuristic functions based on relaxing the planning task at hand, where the relaxation is to assume that all delete lists are empty. The unprecedented success of such methods, in many commonly used benchmark examples, calls for an understanding of what classes of domains these methods are well suited for. In the investigation at hand, we derive a formal background to such an understanding. We perform a case study covering a range of 30 commonly used STRIPS and ADL benchmark domains, including all examples used in the first four international planning competitions. We prove connections between domain structure and local search topology - heuristic cost surface properties - under an idealized version of the heuristic functions used in modern planners. The idealized heuristic function is called h+, and differs from the practically used functions in that it returns the length of an optimal relaxed plan, which is NP-hard to compute. We identify several key characteristics of the topology under h+, concerning the existence/non-existence of unrecognized dead ends, as well as the existence/non-existence of constant upper bounds on the difficulty of escaping local minima and benches. These distinctions divide the (set of all) planning domains into a taxonomy of classes of varying h+ topology. As it turns out, many of the 30 investigated domains lie in classes with a relatively easy topology. Most particularly, 12 of the domains lie in classes where FF's search algorithm, provided with h+, is a polynomial solving mechanism. We also present results relating h+ to its approximation as implemented in FF. The behavior regarding dead ends is provably the same. We summarize the results of an empirical investigation showing that, in many domains, the topological qualities of h+ are largely inherited by the approximation. The overall investigation gives a rare example of a successful analysis of the connections between typical-case problem structure, and search performance. The theoretical investigation also gives hints on how the topological phenomena might be automatically recognizable by domain analysis techniques. We outline some preliminary steps we made into that direction.
Semantically-Smart Disk Systems We propose and evaluate the concept of a semantically-smart disk system (SDS). As opposed to a traditional "smart" disk, an SDS has detailed knowledge of how the file system above is using the disk system, including information about the on-disk data structures of the file system. An SDS exploits this knowledge to transparently improve performance or enhance functionality beneath a standard block read/write interface. To automatically acquire this knowledge, we introduce a tool (EOF) that can discover file-system structure for certain types of file systems, and then show how an SDS can exploit this knowledge on-line to understand file-system behavior. We quantify the space and time overheads that are common in an SDS, showing that they are not excessive. We then study the issues surrounding SDS construction by designing and implementing a number of prototypes as case studies; each case study exploits knowledge of some aspect of the file system to implement powerful functionality beneath the standard SCSI interface. Overall, we find that a surprising amount of functionality can be embedded within an SDS, hinting at a future where disk manufacturers can compete on enhanced functionality and not simply cost-per-byte and performance.
Architecture Elements for Highly-Interactive Business-Oriented Applications It is now widely recognized that powerful architecture elements are needed for implementing highly-interactive business-oriented applications during at least two stages of the whole lifecycle, namely the specification and the design. In this paper, we deal with the architecture model of the TRIDENT project, which introduces three components: the semantic core component, the dialog component and the presentation component. This is a hierarchical object-oriented architecture relying on the use of three kinds of objects: application objects, dialog objects, and interaction objects. Specification and rule languages are given for developing the dialog component. An abstract data model is used for characterizing the application objects. Selection rules are given for choosing appropriate interaction objects for the presentation component according to the abstract data model and to the user level.
Evolving mach 3.0 to a migrating thread model We have modified Mach 3.0 to treat cross-domain remote procedure call (RPC) as a single entity, instead of a sequence of message passing operations. With RPC thus elevated, we improved the transfer of control during RPC by changing the thread model. Like most operating systems, Mach views threads as statically associated with a single task, with two threads involved in an RPC. An alternate model is that of migrating threads, in which, during RPC, a single thread abstraction moves between tasks with the logical flow of control, and "server" code is passively executed. We have compatibly replaced Mach's static threads with migrating threads, in an attempt to isolate this aspect of operating system design and implementation. The key element of our design is a decoupling of the thread abstraction into the execution context and the schedulable thread of control, consisting of a chain of contexts. A key element of our implementation is that threads are now "based" in the kernel, and temporarily make excursions into tasks via upcalls. The new system provides more precisely defined semantics for thread manipulation and additional control operations, allows scheduling and accounting attributes to follow threads, simplifies kernel code, and improves RPC performance. We have retained the old thread and IPC interfaces for backwards compatibility, with no changes required to existing client programs and only a minimal change to servers, as demonstrated by a functional Unix single server and clients. The logical complexity along the critical RPC path has been reduced by a factor of nine. Local RPC, doing normal marshaling, has sped up by factors of 1.7-3.4. We conclude that a migrating-thread model is superior to a static model, that kernel-visible RPC is a prerequisite for this improvement, and that it is feasible to improve existing operating systems in this manner.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst-case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating into our design the option of adding extra redundancy once we find that the array's disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent even at catastrophic failure levels where the array would lose up to a quarter of its storage capacity in a year.
1.037581
0.028571
0.028571
0.014477
0.005916
0.002872
0.001049
0.000178
0.000005
0
0
0
0
0
Hibernator: helping disk arrays sleep through the winter Energy consumption has become an important issue in high-end data centers, and disk arrays are one of the largest energy consumers within them. Although several attempts have been made to improve disk array energy management, the existing solutions either provide little energy savings or significantly degrade performance for data center workloads.Our solution, Hibernator, is a disk array energy management system that provides improved energy savings while meeting performance goals. Hibernator combines a number of techniques to achieve this: the use of disks that can spin at different speeds, a coarse-grained approach for dynamically deciding which disks should spin at which speeds, efficient ways to migrate the right data to an appropriate-speed disk automatically, and automatic performance boosts if there is a risk that performance goals might not be met due to disk energy management.In this paper, we describe the Hibernator design, and present evaluations of it using both trace-driven simulations and a hybrid system comprised of a real database server (IBM DB2) and an emulated storage server with multi-speed disks. Our file-system and on-line transaction processing (OLTP) simulation results show that Hibernator can provide up to 65% energy savings while continuing to satisfy performance goals (6.5--26 times better than previous solutions). Our OLTP emulated system results show that Hibernator can save more energy (29%) than previous solutions, while still providing an OLTP transaction rate comparable to a RAID5 array with no energy management.
SODA: sensitivity based optimization of disk architecture Storage plays a pivotal role in the performance of many applications. Optimizing disk architectures is a design-time as well as a run-time issue and requires balancing between performance, power and capacity. The design space is large and there are many "knobs" that can be used to optimize disk drive behavior. Here we present a sensitivity-based optimization for disk architectures (SODA) which leverages results from digital circuit design. Using detailed models of the electro-mechanical behavior of disk drives and a suite of realistic workloads, we show how SODA can aid in design and runtime optimization.
Intra-disk Parallelism: An Idea Whose Time Has Come Server storage systems use a large number of disks to achieve high performance, thereby consuming a significant amount of power. In this paper, we propose to significantly reduce the power consumed by such storage systems via intra-disk parallelism, wherein disk drives can exploit parallelism in the I/O request stream. Intra-disk parallelism can facilitate replacing a large disk array with a smaller one, using the minimum number of disk drives needed to satisfy the capacity requirements. We show that the design space of intra-disk parallelism is large and present a taxonomy to formulate specific implementations within this space. Using a set of commercial workloads, we perform a limit study to identify the key performance bottlenecks that arise when we replace a storage array that is tuned to provide high performance with a single high-capacity disk drive. We show that it is possible to match, and even surpass, the performance of a storage array for these workloads by using a single disk drive of sufficient capacity that exploits intra-disk parallelism, while significantly reducing the power consumed by the storage system. We evaluate the performance and power consumption of disk arrays composed of intra-disk parallel drives, and discuss engineering and cost issues related to the implementation and deployment of such disk drives.
Disk Drive Roadmap from the Thermal Perspective: A Case for Dynamic Thermal Management The importance of pushing the performance envelope of disk drives continues to grow, not just in the server market but also in numerous consumer electronics products. One of the most fundamental factors impacting disk drive design is the heat dissipation and its effect on drive reliability, since high temperatures can cause off-track errors, or even head crashes. Until now, drive manufacturers have continued to meet the 40% annual growth target of the internal data rates (IDR) by increasing RPMs, and shrinking platter sizes, both of which have counter-acting effects on the heat dissipation within a drive. As this paper will show, we are getting to a point where it is becoming very difficult to stay on this roadmap. This paper presents an integrated disk drive model that captures the close relationships between capacity, performance and thermal characteristics over time. Using this model, we quantify the drop off in IDR growth rates over the next decade if we are to adhere to the thermal envelope of drive design.We present two mechanisms for buying back some of this IDR loss with Dynamic Thermal Management (DTM). The first DTM technique exploits any available thermal slack, between what the drive was intended to support and the currently lower operating temperature, to ramp up the RPM. The second DTM technique assumes that the drive is only designed for average case behavior, thus allowing higher RPMs than the thermal envelope, and employs dynamic throttling of disk drive activities to remain within this envelope.
DiskGroup: Energy Efficient Disk Layout for RAID1 Systems Energy consumption is becoming an increasingly important issue in storage systems, especially for high performance data centers and network servers. In this paper, we introduce a family of energy-efficient disk layouts that generalize the data mirroring of a conventional RAID1 system. The scheme, called DiskGroup, distributes the workload between the primary disks and secondary disks based on the characteristics of the workload. We develop an analytic model to explore the design space and compute the estimated energy savings and performance as a function of workload characteristics. The analysis shows the potential for significant energy savings over simple RAID1 data mirroring.
Logging RAID - An Approach to Fast, Reliable, and Low-Cost Disk Arrays Parity-based disk arrays provide high reliability and high performance for read and large write accesses at low storage cost. However, small writes are notoriously slow due to the well-known read-modify-write problem. This paper presents logging RAID, a disk array architecture that adopts data logging techniques to overcome the small-write problem in parity-based disk arrays. Logging RAID achieves high performance for a wide variety of I/O access patterns with very small disk space overhead. We show this through trace-driven simulations.
X-RAY: A Non-Invasive Exclusive Caching Mechanism for RAIDs RAID storage arrays often possess gigabytes of RAM for caching disk blocks. Currently, most RAID systems use LRU or LRU-like policies to manage these caches. Since these array caches do not recognize the presence of file system buffer caches, they redundantly retain many of the same blocks as those cached by the file system, thereby wasting precious cache space. In this paper, we introduce X-RAY, an exclusive RAID array caching mechanism. X-RAY achieves a high degree of (but not perfect) exclusivity through gray-box methods: by observing which files have been accessed through updates to file system meta-data, X-RAY constructs an approximate image of the contents of the file system cache and uses that information to determine the exclusive set of blocks that should be cached by the array. We use microbenchmarks to demonstrate that X-RAY's prediction of the file system buffer cache contents is highly accurate, and trace-based simulation to show that X-RAY considerably outperforms LRU and performs as well as other more invasive approaches. The main strength of the X-RAY approach is that it is easy to deploy: all performance gains are achieved without changes to the SCSI protocol or the file system above.
Parity logging: overcoming the small write problem in redundant disk arrays Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for read accesses and large write accesses. Their performance on small writes, however, is much worse than mirrored disks, the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide a detailed analysis of parity logging and competing schemes (mirroring, floating storage, and RAID level 5) and verify these models by simulation. Parity logging provides performance competitive with mirroring, the best of the alternative single failure tolerating disk array organizations. However, its overhead cost is close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching much more effectively than all three alternative approaches.
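In miniature, the contrast works as follows: an in-place RAID-5 style small write must read and rewrite the parity immediately, while parity logging appends an XOR "parity update" record and applies the log to the parity region later, in large efficient batches. A toy sketch (bytes stand in for blocks; all names are illustrative):

```python
# Toy contrast between an in-place small write and a logged small write.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def inplace_small_write(data, parity, i, new_block):
    parity = xor(parity, xor(data[i], new_block))   # parity touched per write
    data[i] = new_block
    return parity

def logged_small_write(data, log, i, new_block):
    log.append(xor(data[i], new_block))             # parity update record only
    data[i] = new_block

def apply_log(parity, log):
    for rec in log:                                  # one batched parity pass
        parity = xor(parity, rec)
    log.clear()
    return parity

data = [bytes([1, 2]), bytes([3, 4])]
parity = xor(data[0], data[1])
log = []
logged_small_write(data, log, 0, bytes([9, 9]))
parity = apply_log(parity, log)
print(parity == xor(data[0], data[1]))   # True: parity caught up in one pass
```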
Reliability Analysis of Disk Array Organizations by Considering Uncorrectable Bit Errors In this paper, we present an analytic model to study the reliability of some important disk array organizations that have been proposed by others in the literature. These organizations are based on the combination of two options for the data layout, regular RAID-5 and block designs, and three alternatives for sparing, hot sparing, distributed sparing and parity sparing. Uncorrectable bit errors have big effects on reliability but are ignored in traditional reliability analysis of disk arrays. We consider both disk failures and uncorrectable bit errors in the model. The reliability of disk arrays is measured in terms of MTTDL (Mean Time To Data Loss). A unified formula of MTTDL has been derived for these disk array organizations. The MTTDLs of these disk array organizations are also compared using the analytic model.
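As a point of reference for the kind of quantity being computed, here is the standard textbook Markov-style MTTDL approximation for a RAID-5 group, extended with a term for an uncorrectable bit error hitting the rebuild read. This is a generic approximation with illustrative parameter values, not the unified formula derived in the paper.

```python
# Textbook Markov-style MTTDL approximation for a RAID-5 group, with an
# extra term for an uncorrectable bit error (UBE) during the rebuild read.
# Generic model with illustrative parameters, not the paper's formula.
def mttdl_raid5(n_disks, mttf_h, mttr_h, ube_per_bit, bits_per_disk):
    p_ube = min(1.0, ube_per_bit * (n_disks - 1) * bits_per_disk)
    # data loss while rebuilding: a second disk failure, or an unreadable bit
    p_loss = (n_disks - 1) * mttr_h / mttf_h + p_ube
    return mttf_h / (n_disks * p_loss)

# 8 disks, 1M-hour MTTF, 24 h rebuild, 1e-15 UBE rate, 1 TB (8e12 bits) disks
print(mttdl_raid5(8, 1e6, 24.0, 1e-15, 8e12) / 8760.0, "years")
```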
A low-overhead high-performance unified buffer management scheme that exploits sequential and looping references In traditional file system implementations, the Least Recently Used (LRU) block replacement scheme is widely used to manage the buffer cache due to its simplicity and adaptability. However, the LRU scheme exhibits performance degradations because it does not make use of reference regularities such as sequential and looping references. In this paper, we present a Unified Buffer Management (UBM) scheme that exploits these regularities and yet, is simple to deploy. The UBM scheme automatically detects sequential and looping references and stores the detected blocks in separate partitions of the buffer cache. These partitions are managed by appropriate replacement schemes based on their detected patterns. The allocation problem among the divided partitions is also tackled with the use of the notion of marginal gains. In both trace-driven simulation experiments and experimental studies using an actual implementation in the FreeBSD operating system, the performance gains obtained through the use of this scheme are substantial. The results show that the hit ratios improve by as much as 57.7% (with an average of 29.2%) and the elapsed times are reduced by as much as 67.2% (with an average of 28.7%) compared to the LRU scheme for the workloads we used.
An adaptive partitioning scheme for DRAM-based cache in Solid State Drives Recently, NAND flash-based Solid State Drives (SSDs) have been rapidly adopted in laptops, desktops, and server storage systems because their performance is superior to that of traditional magnetic disks. However, NAND flash memory has some limitations such as out-of-place updates, bulk erase operations, and a limited number of write operations. To alleviate these unfavorable characteristics, various techniques for improving internal software and hardware components have been devised. In particular, the internal device cache of SSDs has a significant impact on the performance. The device cache is used for two main purposes: to absorb frequent read/write requests and to store logical-to-physical address mapping information. In the device cache, we observed that the optimal ratio of the data buffering and the address mapping space changes according to workload characteristics. To achieve optimal performance in SSDs, the device cache should be appropriately partitioned between the two main purposes. In this paper, we propose an adaptive partitioning scheme, which is based on a ghost caching mechanism, to adaptively tune the ratio of the buffering and the mapping space in the device cache according to the workload characteristics. The simulation results demonstrate that the performance of the proposed scheme approximates the best performance.
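A rough sketch of how ghost caching can drive the partitioning is shown below. It assumes metadata-only "ghost" lists of recently evicted keys per partition and a shift-by-one-slot policy; both are illustrative stand-ins for the paper's tuning rule:

```python
# Illustrative sketch (not the paper's algorithm): a hit in a
# partition's ghost list means that partition would have benefited
# from more space, so the boundary shifts one slot toward it.
from collections import OrderedDict

class AdaptivePartition:
    def __init__(self, total_slots):
        self.total = total_slots
        self.buffer_share = total_slots // 2   # rest goes to mapping
        self.ghost_buffer = OrderedDict()
        self.ghost_mapping = OrderedDict()

    def record_eviction(self, key, kind):
        # Remember evicted keys as metadata only, bounded in size.
        ghost = self.ghost_buffer if kind == "buffer" else self.ghost_mapping
        ghost[key] = True
        if len(ghost) > self.total:
            ghost.popitem(last=False)

    def on_miss(self, key, kind):
        # kind is "buffer" (data) or "mapping" (address translation).
        ghost = self.ghost_buffer if kind == "buffer" else self.ghost_mapping
        if key in ghost:
            ghost.pop(key)
            # Grow the partition whose ghost list was hit.
            if kind == "buffer":
                self.buffer_share = min(self.total - 1, self.buffer_share + 1)
            else:
                self.buffer_share = max(1, self.buffer_share - 1)
```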
Unsupervised Learning of Image Transformations We describe a probabilistic model for learning rich, distributed representations of image transformations. The basic model is defined as a gated conditional random field that is trained to predict transformations of its inputs using a factorial set of latent variables. Inference in the model consists in extracting the transformation, given a pair of images, and can be performed exactly and efficiently. We show that, when trained on natural videos, the model develops domain specific motion features, in the form of fields of locally transformed edge filters. When trained on affine, or more general, transformations of still images, the model develops codes for these transformations, and can subsequently perform recognition tasks that are invariant under these transformations. It can also fantasize new transformations on previously unseen images. We describe several variations of the basic model and provide experimental results that demonstrate its applicability to a variety of tasks.
Hierarchical Representation Using NMF.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1.010122, 0.017703, 0.017703, 0.007814, 0.006452, 0.004083, 0.00177, 0.000633, 0.000118, 0.000035, 0.000001, 0, 0, 0
Expressing default abduction problems as quantified Boolean formulas Abduction is the process of finding explanations for observed phenomena in accord to known laws about a given application domain. This form of reasoning is an important principle of common-sense reasoning and is particularly relevant in conjunction with nonmonotonic knowledge representation formalisms. In this paper, we deal with a model for abduction in which the domain knowledge is represented in terms of a default theory. We show how the main reasoning tasks associated with this particular form of abduction can be axiomatised within the language of quantified Boolean logic. More specifically, we provide polynomial-time constructible reductions mapping a given abduction problem into a quantified Boolean formula (QBF) such that the satisfying truth assignments to the free variables of the latter determine the solutions of the original problem. Since there are now efficient QBF-solvers available, this reduction technique yields a straightforward method to implement the discussed abduction tasks. We describe a realisation of this approach by appeal to the reasoning system QUIP.
Modal Nonmonotonic Logics Revisited: Efficient Encodings for the Basic Reasoning Tasks Modal nonmonotonic logics constitute a well-known family of knowledge-representation formalisms capturing ideally rational agents reasoning about their own beliefs. Although these formalisms are extensively studied from a theoretical point of view, most of these approaches lack generally available solvers thus far. In this paper, we show how variants of Moore's autoepistemic logic can be axiomatised by means of quantified Boolean formulas (QBFs). More specifically, we provide polynomial reductions of the basic reasoning tasks associated with these logics into the evaluation problem of QBFs. Since there are now efficient QBF-solvers, this reduction technique yields a practicably relevant approach to build prototype reasoning systems for these formalisms. We incorporated our encodings within the system QUIP and tested their performance on a class of benchmark problems using different underlying QBF-solvers.
Computing Stable Models with Quantified Boolean Formulas: Some Experimental Results Quantified boolean formulas (QBFs) are extensions of ordinary propositional formulas which admit efficient representations of many important reasoning tasks. The existence of sophisticated QBF-solvers makes it possible to realize prototype systems for quite different knowledge-representation formalisms in a uniform manner. The system QUIP follows this idea and implements inference tasks from the area of nonmonotonic reasoning by using suitable encodings to QBFs. In this paper, we report experimental results evaluating the performance of QUIP. In particular, we deal here with the disjunctive logic programming module of QUIP, which will be the subject of two kinds of performance tests: First, we compare QUIP with the state-of-the-art logic programming systems dlv and smodels, and second, we examine the performance of different QBF-solvers on the considered problem classes. As benchmark philosophy we employ classes of disjunctive logic programs which are responsible for the hardness of the given decision problems. The results show reasonable performance of the QBF approach and indicate possible improvements of QUIP by exploiting different QBF-solvers as underlying inference engines.
On Computing Solutions to Belief Change Scenarios Belief change scenarios were recently introduced as a framework for expressing different forms of belief change. In this paper, we show how belief revision and belief contraction (within belief change scenarios) can be axiomatised by means of quantified Boolean formulas. This approach has several benefits. First, it furnishes an axiomatic specification of belief change within belief change scenarios. Second, this axiomatisation allows us to identify upper bounds for the complexity of revision and contraction within belief change scenarios. We strengthen these upper bounds by providing strict complexity results for the considered reasoning tasks. Finally, we obtain an implementation of different forms of belief change by appeal to the existing system QUIP.
An algorithm to evaluate quantified Boolean formulae The high computational complexity of advanced reasoning tasks such as belief revision and planning calls for efficient and reliable algorithms for reasoning problems harder than NP. In this paper we propose Evaluate, an algorithm for evaluating Quantified Boolean Formulae, a language that extends propositional logic in a way such that many advanced forms of propositional reasoning, e.g., reasoning about knowledge, can be easily formulated as evaluation of a QBF. Algorithms for evaluation of QBFs are suitable for the experimental analysis on a wide range of complexity classes, a property not easily found in other formalisms. Evaluate is based on a generalization of the Davis-Putnam procedure for SAT, and is guaranteed to work in polynomial space. Before presenting Evaluate, we discuss all the abstract properties of QBFs that we singled out to make the algorithm more efficient. We also briefly mention the main results of the experimental analysis, which is reported elsewhere.
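A minimal recursive evaluator in this Davis-Putnam style can be written in a few lines: branch on the outermost quantified variable, combining the branches with OR for an existential and AND for a universal quantifier. This naive sketch omits all of the paper's pruning techniques and runs in exponential time, but it shows the basic recursion:

```python
# Minimal recursive QBF evaluation: branch on the outermost variable.
# The matrix is in CNF: a list of clauses, each a list of signed ints.
def evaluate(prefix, clauses, assignment=None):
    assignment = assignment or {}
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return False        # empty clause: matrix is false
        simplified.append(kept)
    if not simplified:
        return True             # all clauses satisfied
    (quant, var), rest = prefix[0], prefix[1:]
    branches = (evaluate(rest, simplified, {**assignment, var: val})
                for val in (False, True))
    return any(branches) if quant == "E" else all(branches)

# forall x exists y: (x or y) and (not x or not y) -- true (y = not x)
print(evaluate([("A", 1), ("E", 2)], [[1, 2], [-1, -2]]))
```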
Complexity of Nested Circumscription and Nested Abnormality Theories Circumscription has been recognized as an important principle for knowledge representation and common-sense reasoning. The need for a circumscriptive formalism that allows for simple yet elegant modular problem representation has led Lifschitz (AIJ, 1995) to introduce nested abnormality theories (NATs) as a tool for modular knowledge representation, tailored for applying circumscription to minimize exceptional circumstances. Abstracting from this particular objective, we propose LCIRC, which is an extension of generic propositional circumscription by allowing propositional combinations and nesting of circumscriptive theories. As shown, NATs are naturally embedded into this language, and are in fact of equal expressive capability. We then analyze the complexity of LCIRC and NATs, and in particular the effect of nesting. The latter is found to be a source of complexity, which climbs the Polynomial Hierarchy as the nesting depth increases and reaches PSPACE-completeness in the general case. We also identify meaningful syntactic fragments of NATs which have lower complexity. In particular, we show that the generalization of Horn circumscription in the NAT framework remains coNP-complete, and that Horn NATs without fixed letters can be efficiently transformed into an equivalent Horn CNF, which implies polynomial solvability of principal reasoning tasks. Finally, we also study extensions of NATs and briefly address the complexity in the first-order case. Our results give insight into the "cost" of using LCIRC (resp. NATs) as a host language for expressing other formalisms such as action theories, narratives, or spatial theories.
Fixed-Parameter Tractability and Completeness I: Basic Results For many fixed-parameter problems that are trivially soluble in polynomial time, such as ($k$-)DOMINATING SET, essentially no better algorithm is presently known than the one which tries all possible solutions. Other problems, such as ($k$-)FEEDBACK VERTEX SET, exhibit fixed-parameter tractability: for each fixed $k$ the problem is soluble in time bounded by a polynomial of degree $c$, where $c$ is a constant independent of $k$. We establish the main results of a completeness program which addresses the apparent fixed-parameter intractability of many parameterized problems. In particular, we define a hierarchy of classes of parameterized problems $FPT \subseteq W[1] \subseteq W[2] \subseteq \cdots \subseteq W[SAT] \subseteq W[P]$ and identify natural complete problems for $W[t]$ for $t \geq 2$. (In other papers we have shown many problems complete for $W[1]$.) DOMINATING SET is shown to be complete for $W[2]$, and thus is not fixed-parameter tractable unless INDEPENDENT SET, CLIQUE, IRREDUNDANT SET and many other natural problems in $W[2]$ are also fixed-parameter tractable. We also give a compendium of currently known hardness results as an appendix.
Asymptotically optimal encodings of conformant planning in QBF The world is unpredictable, and acting intelligently requires anticipating possible consequences of actions that are taken. Assuming that the actions and the world are deterministic, planning can be represented in the classical propositional logic. Introducing nondeterminism (but not probabilities) or several initial states increases the complexity of the planning problem and requires the use of quantified Boolean formulae (QBF). The currently leading logic-based approaches to conditional planning use explicitly or implicitly a QBF with the prefix ∃∀∃. We present formalizations of the planning problem as QBF which have an asymptotically optimal linear size and the optimal number of quantifier alternations in the prefix: ∃∀ and ∀∃. This is in accordance with the fact that the planning problem (under the restriction to polynomial size plans) is on the second level of the polynomial hierarchy, not on the third.
Automatic OBDD-based generation of universal plans in non-deterministic domains Most real world environments are non-deterministic. Automatic plan formation in non-deterministic domains is, however, still an open problem. In this paper we present a practical algorithm for the automatic generation of solutions to planning problems in non-deterministic domains. Our approach has the following main features. First, the planner generates Universal Plans. Second, it generates plans which are guaranteed to achieve the goal in spite of non-determinism, if such plans exist. Otherwise, the planner generates plans which encode iterative trial-and-error strategies (e.g. try to pick up a block until it succeeds), which are guaranteed to achieve the goal under the assumption that if there is a non-deterministic possibility for the iteration to terminate, this will not be ignored forever. Third, the implementation of the planner is based on symbolic model checking techniques which have been designed to explore efficiently large state spaces. The implementation exploits the compactness of OBDDs (Ordered Binary Decision Diagrams) to express in a practical way universal plans of extremely large size.
Complexity, decidability and undecidability results for domain-independent planning In this paper, we examine how the complexity of domain-independent planning with STRIPS-style operators depends on the nature of the planning operators. We show conditions under which planning is decidable and undecidable. Our results on this topic solve an open problem posed by Chapman (5), and clear up some difficulties with his undecidability theorems.
Bootstrapping with Noise: An Effective Regularization Technique Bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feed-forward networks and for other statistical methods such as generalized additive models. It is shown that noisy bootstrap performs best in conjunction with weight decay regularization and ensemble averaging. The two-spiral problem, a highly non-linear noise-free data, is used to demonstrate these findings. The combination of noisy bootstrap and ensemble averaging is also...
Heuristics for Planning with Penalties and Rewards using Compiled Knowledge. The automatic derivation of heuristic functions for guiding the search for plans in large spaces is a fundamental technique in planning. The type of heuristics that have been considered so far, however, deal only with simple planning models where costs are associated with actions but not with states. In this work we address this limitation by formulating a more expressive planning model and a corresponding heuristic where preferences in the form of penalties and rewards are associated with fluents as well. The heuristic, that is a generalization of the well-known delete-relaxation heuristic proposed in classical planning, is admissible, informative, but intractable. Exploiting however a correspondence between heuristics and preferred models, and a property of formulas compiled in d-DNNF, we show that if a suitable relaxation of the theory is compiled into d-DNNF, the heuristic can be computed for any search state in time that is linear in the size of the compiled representation. While this representation may have exponential size, as for OBDDs, this is not necessarily so. We report preliminary empirical results, discuss the application of the framework in settings where there are no goals but just preferences, and assess further variations and challenges.
Case-Based Support for the Design of Dynamic System Requirements Using formal specifications based on varieties of mathematical logic is becoming common in the process of designing and implementing software. Formal methods are usually intended to include all important details of the final system in the specification with the aim of proving that it possesses certain mathematical properties. In large, complex systems, this task requires sophisticated theorem proving, which can be difficult and complicated. Telecommunication systems are large and complex, making detailed formal specification impractical with current technology. However, roughly formal "sketches" of the behaviours these services provide can be produced, and these can be very helpful in locating which service might be relevant to a given problem. Our case-based approach uses coarse-grained requirements specification sketches to outline the basic behaviour of the system's functional modules (called services), thereby allowing us to identify, reuse and adapt requirements (from cases stored in a library) to construct new cases. By using cases that have already been tested, integrated and implemented, less effort is needed to produce requirements specifications on a large scale. Using a hypothetical telecommunication system as our example, we shall show how comparatively simple logic can be used to capture coarse-grained behaviour and how a case-based approach benefits from this. The input from the examples is used both to identify the cases whose behaviour corresponds most closely to the designer's intentions and to adapt and finally verify the proposed solution against the examples.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1.126367, 0.065292, 0.045153, 0.015507, 0.009393, 0.002738, 0.001068, 0.000183, 0.00002, 0.000001, 0, 0, 0, 0
Recursive similarity-based algorithm for deep learning Recursive Similarity-Based Learning algorithm (RSBL) follows the deep learning idea, exploiting similarity-based methodology to recursively generate new features. Each transformation layer is generated separately, using as inputs information from all previous layers, and as new features similarity to the k nearest neighbors scaled using Gaussian kernels. In the feature space created in this way results of various types of classifiers, including linear discrimination and distance-based methods, are significantly improved. As an illustrative example a few non-trivial benchmark datasets from the UCI Machine Learning Repository are analyzed.
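One transformation layer of this kind can be sketched as follows, assuming Euclidean distances, a fixed Gaussian width, and similarity features restricted to the k nearest training points; the hyperparameters are placeholders, not the authors' settings:

```python
# One similarity-based layer in the spirit of RSBL (illustrative):
# new features are Gaussian-scaled similarities to the k nearest
# training points, concatenated onto the existing features so that
# deeper layers see information from all previous layers.
import numpy as np

def similarity_layer(X, X_train, k=5, sigma=1.0):
    # Pairwise squared Euclidean distances, shape (n, n_train).
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    # Keep only each point's k nearest neighbours; zero elsewhere.
    feats = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, :k]
    rows = np.arange(X.shape[0])[:, None]
    feats[rows, idx] = np.exp(-d2[rows, idx] / (2 * sigma ** 2))
    return np.hstack([X, feats])

X = np.random.randn(10, 3)
print(similarity_layer(X, X, k=3).shape)   # (10, 3 + 10)
```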
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map-for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
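The computation described here reduces to an eigendecomposition of the centered kernel matrix. A compact numpy sketch with an RBF kernel follows; the kernel choice and gamma are illustrative, not tied to the paper's experiments:

```python
# Kernel PCA in a few lines: build the kernel (Gram) matrix, centre
# it in feature space, and take the top eigenvectors as components.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # RBF kernel matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-gamma * d2)
    # Double-centre K so features have zero mean in feature space.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; reverse them.
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Projections of the training points onto the top components.
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))

X = np.random.randn(50, 5)
print(kernel_pca(X).shape)   # (50, 2)
```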
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
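The numerical core of the square-root approach is solving the linearized least-squares system by factorizing the measurement Jacobian rather than maintaining a dense covariance. A dense-numpy stand-in for one Gauss-Newton step is shown below; a real SAM implementation would use sparse matrices and variable-ordering heuristics:

```python
# One Gauss-Newton step via the square-root (QR) route: minimise
# ||A x - b||^2 by factorising A = QR and back-substituting.
import numpy as np

def solve_gauss_newton_step(A, b):
    Q, R = np.linalg.qr(A)          # reduced QR of the Jacobian
    return np.linalg.solve(R, Q.T @ b)

A = np.random.randn(30, 6)          # stacked measurement Jacobians
b = np.random.randn(30)             # stacked residuals
print(solve_gauss_newton_step(A, b))
```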
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test if a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs, can be successfully applied to the development of medium size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers were devoted to studying incident classification algorithms, few studies investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from the data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all of the three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers since the feature learning is unsupervised. © 2012 IEEE.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
SemEval-2012 task 1: English Lexical Simplification We describe the English Lexical Simplification task at SemEval-2012. This is the first time such a shared task has been organized and its goal is to provide a framework for the evaluation of systems for lexical simplification and foster research on context-aware lexical simplification approaches. The task requires that annotators and systems rank a number of alternative substitutes -- all deemed adequate -- for a target word in context, according to how "simple" these substitutes are. The notion of simplicity is biased towards non-native speakers of English. Out of nine participating systems, the best scoring ones combine context-dependent and context-independent information, with the strongest individual contribution given by the frequency of the substitute regardless of its context.
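Since substitute frequency was the strongest individual signal in the task, a competitive baseline can be sketched as a pure frequency ranker. The `freq` table below is a hypothetical stand-in for real corpus counts:

```python
# Rank candidate substitutes by how "simple" they are, using raw
# corpus frequency as a proxy (the strongest single signal reported).
freq = {"difficult": 90_000, "hard": 400_000, "arduous": 3_000}

def rank_by_simplicity(candidates):
    # More frequent words are treated as simpler; unseen words last.
    return sorted(candidates, key=lambda w: freq.get(w, 0), reverse=True)

print(rank_by_simplicity(["arduous", "difficult", "hard"]))
```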
Faster and smaller N-gram language models N-gram language models are a major resource bottleneck in machine translation. In this paper, we present several language model implementations that are both highly compact and fast to query. Our fastest implementation is as fast as the widely used SRILM while requiring only 25% of the storage. Our most compact representation can store all 4 billion n-grams and associated counts for the Google n-gram corpus in 23 bits per n-gram, the most compact lossless representation to date, and even more compact than recent lossy compression techniques. We also discuss techniques for improving query speed during decoding, including a simple but novel language model caching technique that improves the query speed of our language models (and SRILM) by up to 300%.
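The caching technique mentioned at the end exploits the fact that decoders query the same n-grams repeatedly. A toy sketch of that idea follows, with a plain dict standing in for the compact backing store; the memo-table size and fallback score are arbitrary assumptions:

```python
# Sketch of query caching: a small memo table in front of a (slow,
# compact) backing store pays for itself because decoding queries
# repeat heavily. A real LM would back off to shorter contexts
# instead of returning a flat floor score.
from functools import lru_cache

BACKING_STORE = {("the", "cat"): -1.7, ("cat", "sat"): -2.1}

@lru_cache(maxsize=100_000)
def logprob(ngram):
    return BACKING_STORE.get(ngram, -10.0)

print(logprob(("the", "cat")), logprob(("the", "cat")))  # 2nd is cached
```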
Out in the Open: Finding and Categorising Errors in the Lexical Simplification Pipeline. Lexical simplification is the task of automatically reducing the complexity of a text by identifying difficult words and replacing them with simpler alternatives. Whilst this is a valuable application of natural language generation, rudimentary lexical simplification systems suffer from a high error rate which often results in nonsensical, non-simple text. This paper seeks to characterise and quantify the errors which occur in a typical baseline lexical simplification system. We expose 6 distinct categories of error and propose a classification scheme for these. We also quantify these errors for a moderate size corpus, showing the magnitude of each error type. We find that for 183 identified simplification instances, only 19 (10.38%) result in a valid simplification, with the rest causing errors of varying gravity.
Putting it simply: a context-aware approach to lexical simplification We present a method for lexical simplification. Simplification rules are learned from a comparable corpus, and the rules are applied in a context-aware fashion to input sentences. Our method is unsupervised. Furthermore, it does not require any alignment or correspondence among the complex and simple corpora. We evaluate the simplification according to three criteria: preservation of grammaticality, preservation of meaning, and degree of simplification. Results show that our method outperforms an established simplification baseline for both meaning preservation and simplification, while maintaining a high level of grammaticality.
For the sake of simplicity: unsupervised extraction of lexical simplifications from Wikipedia We report on work in progress on extracting lexical simplifications (e.g., "collaborate" → "work together"), focusing on utilizing edit histories in Simple English Wikipedia for this task. We consider two main approaches: (1) deriving simplification probabilities via an edit model that accounts for a mixture of different operations, and (2) using metadata to focus on edits that are more likely to be simplification operations. We find our methods to outperform a reasonable baseline and yield many high-quality lexical simplifications not included in an independently-created manually prepared list.
Effect of Non-linear Deep Architecture in Sequence Labeling.
Global Continuation for Distance Geometry Problems Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry problem as a global minimization problem with special structure. We show that global smoothing techniques and a continuation approach for global optimization can be used to determine global solutions of this problem reliably and efficiently. The global continuation approach determines a global solution with less computational effort than is required by a standard multistart algorithm. Moreover, the continuation approach usually finds the global solution from any given starting point, while the multistart algorithm tends to fail.
Convex Neural Networks Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural networks. We show that training multi-layer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an infinite number of variables, but can be solved by incrementally inserting a hidden unit at a time, each time finding a linear classifier that minimizes a weighted sum of errors.
Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis Neural networks are a powerful technology for classification of visual inputs arising from documents. However, there is a confusing plethora of different neural network methods that are used in the literature and in industry. This paper describes a set of concrete best practices that document analysis researchers can use to get good results with neural networks. The most important practice is getting a training set as large as possible: we expand the training set by adding a new form of distorted data. The next most important practice is that convolutional neural networks are better suited for visual document tasks than fully connected networks. We propose that a simple "do-it-yourself" implementation of convolution with a flexible architecture is suitable for many visual document problems. This simple convolutional neural network does not require complex methods, such as momentum, weight decay, structure-dependent learning rates, averaging layers, tangent prop, or even finely-tuning the architecture. The end result is a very simple yet general architecture which can yield state-of-the-art performance for document analysis. We illustrate our claims on the MNIST set of English digit images.
Restricted Monotonicity A knowledge representation problem can sometimes be viewed as an element of a family of problems, with parameters corresponding to possible assumptions about the domain under consideration. When additional assumptions are made, the class of domains that are being described becomes smaller, so that the class of conclusions that are true in all the domains becomes larger. As a result, a satisfactory solution to a parametric knowledge representation problem on the basis of some nonmonotonic...
Striped Tape Arrays A growing number of applications require high capacity, high throughput tertiary storage systems. We are investigating how data striping ideas apply to arrays of magnetic tape drives. Data striping increases throughput and reduces response time for large accesses to a storage system. Striped magnetic tape systems are particularly appealing because many inexpensive magnetic tape drives have low bandwidth; striping may offer dramatic performance improvements for these systems. There are several important issues in designing striped tape systems: the choice of tape drives and robots, whether to stripe within or between robots, and the choice of the best scheme for distributing data on cartridges. One of the most troublesome problems in striped tape arrays is the synchronization of transfers across tape drives. Another issue is how improved devices will affect the desirability of striping in the future. We present the results of simulations comparing the performance of striped tape systems to non-striped systems.
Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed can nevertheless remain finite: for a special class of initial conditions on the weights, very deep networks incur only a finite, depth independent, delay in learning speed relative to shallow networks. We show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, while scaled random Gaussian initializations cannot. We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pre-training, enjoys depth independent learning times. We further show that these initial conditions also lead to faithful propagation of gradients even in deep nonlinear networks, as long as they operate in a special regime known as the edge of chaos.
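The setting analyzed here is easy to reproduce numerically: a two-layer linear network trained by gradient descent on data from a linear teacher. The sketch below (sizes, learning rate, and initialization scale are arbitrary choices, not the paper's) exhibits the slow-start dynamics despite the linear input-output map:

```python
# Tiny simulation of deep linear network training: y = x @ W1 @ W2.
# The map is linear, but gradient descent in (W1, W2) is nonlinear
# and shows plateaus followed by rapid error drops from small
# random initial conditions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
T = X @ rng.standard_normal((10, 5))          # linear teacher
W1 = 1e-3 * rng.standard_normal((10, 20))     # small random init
W2 = 1e-3 * rng.standard_normal((20, 5))
lr = 1e-3
for step in range(2000):
    E = X @ W1 @ W2 - T                       # residuals
    gW2 = (X @ W1).T @ E / len(X)             # grad of 0.5*MSE
    gW1 = X.T @ (E @ W2.T) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
    if step % 500 == 0:
        print(step, float((E ** 2).mean()))
```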
Improving the tolerance of multilayer perceptrons by minimizing the statistical sensitivity to weight deviations This paper proposes a version of the backpropagation algorithm which increases the tolerance of a feedforward neural network against deviations in the weight values. These changes can originate either when the neural network is mapped on a given VLSI circuit where the precision and/or weight matching are low, or by physical defects affecting the neural circuits. The modified backpropagation algorithm we propose uses the statistical sensitivity of the network to changes in the weights as a quantitative measure of network tolerance and attempts to reduce this statistical sensitivity while keeping the figures for the usual training performance (in errors and time) similar to those obtained with the usual backpropagation algorithm.
Exploring Sequence Alignment Algorithms on FPGA-Based Heterogeneous Architectures With the rapid development of DNA sequencers, the rate of data generation is rapidly outpacing the rate at which it can be computationally processed. Traditional sequence alignment on PCs cannot fulfill the increasing demand. Accelerating the algorithms using FPGAs provides better performance than other platforms. This paper explains and classifies the current sequence alignment algorithms. In addition, we analyze the different types of sequence alignment algorithms and present a taxonomy of FPGA-based sequence alignment implementations. This work summarizes current solutions and provides a reference for further accelerating sequence alignment on FPGA-based heterogeneous architectures.
Scores: 1.024772, 0.024975, 0.023598, 0.019322, 0.013995, 0.000241, 0, 0, 0, 0, 0, 0, 0, 0
Training connectionist models for the structured language model We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.
Rational Kernels: Theory and Algorithms Many classification algorithms were originally designed for fixed-size vectors. Recent applications in text and speech processing and computational biology require however the analysis of variable-length sequences and more generally weighted automata. An approach widely used in statistical learning techniques such as Support Vector Machines (SVMs) is that of kernel methods, due to their computational efficiency in high-dimensional feature spaces. We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels , that extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. Not all rational kernels are positive definite and symmetric (PDS), or equivalently verify the Mercer condition, a condition that guarantees the convergence of training for discriminant classification algorithms such as SVMs. We present several theoretical results related to PDS rational kernels. We show that under some general conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We give the proof of several characterization results that can be used to guide the design of PDS rational kernels. We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before. Rational kernels can be combined with SVMs to form efficient and powerful techniques for a variety of classification tasks in text and speech processing, or computational biology. We describe examples of general families of PDS rational kernels that are useful in many of these applications and report the result of our experiments illustrating the use of rational kernels in several difficult large-vocabulary spoken-dialog classification tasks based on deployed spoken-dialog systems. Our results show that rational kernels are easy to design and implement and lead to substantial improvements of the classification accuracy.
An Information Measure For Classification
Self Supervised Boosting Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.
Global Continuation for Distance Geometry Problems Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry problem as a global minimization problem with special structure. We show that global smoothing techniques and a continuation approach for global optimization can be used to determine global solutions of this problem reliably and efficiently. The global continuation approach determines a global solution with less computational effort than is required by a standard multistart algorithm. Moreover, the continuation approach usually finds the global solution from any given starting point, while the multistart algorithm tends to fail.
A Nonparametric Bayesian Approach to Modeling Overlapping Clusters Although clustering data into mutually exclusive partitions has been an extremely successful approach to unsupervised learning, there are many situations in which a richer model is needed to fully represent the data. This is the case in problems where data points actually simultaneously belong to multiple, overlapping clusters. For example a particular gene may have several functions, therefore belonging to several distinct clusters of genes, and a biologist may want to discover these through unsupervised modeling of gene expression data. We present a new nonparametric Bayesian method, the Infinite Overlapping Mixture Model (IOMM), for modeling overlapping clusters. The IOMM uses exponential family distributions to model each cluster and forms an overlapping mixture by taking products of such distributions, much like products of experts (Hinton, 2002). The IOMM allows an unbounded number of clusters, and assignments of points to (multiple) clusters is modeled using an Indian Buffet Process (IBP) (Griffiths and Ghahramani, 2006). The IOMM has the desirable properties of being able to focus in on overlapping regions while maintaining the ability to model a potentially infinite number of clusters which may overlap. We derive MCMC inference algorithms for the IOMM and show that these can be used to cluster movies into multiple genres.
Input space versus feature space in kernel-based methods. This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the Kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
Principled Hybrids of Generative and Discriminative Models When labelled training data is plentiful, discriminative techniques are widely used since they give excellent generalization performance. However, for large-scale applications such as object recognition, hand labelling of data is expensive, and there is much interest in semi-supervised techniques based on generative models in which the majority of the training data is unlabelled. Although the generalization performance of generative models can often be improved by 'training them discriminatively', they can then no longer make use of unlabelled data. In an attempt to gain the benefit of both generative and discriminative approaches, heuristic procedures have been proposed [2, 3] which interpolate between these two extremes by taking a convex combination of the generative and discriminative objective functions. In this paper we adopt a new perspective which says that there is only one correct way to train a given model, and that a 'discriminatively trained' generative model is fundamentally a new model [7]. From this viewpoint, generative and discriminative models correspond to specific choices for the prior over parameters. As well as giving a principled interpretation of 'discriminative training', this approach opens the door to very general ways of interpolating between generative and discriminative extremes through alternative choices of prior. We illustrate this framework using both synthetic data and a practical example in the domain of multi-class object recognition. Our results show that, when the supply of labelled training data is limited, the optimum performance corresponds to a balance between the purely generative and the purely discriminative.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.
Instant Learning: Parallel Deep Neural Networks and Convolutional Bootstrapping Although deep neural networks (DNN) are able to scale with direct advances in computational power (e.g., memory and processing speed), they are not well suited to exploit the recent trends for parallel architectures. In particular, gradient descent is a sequential process and the resulting serial dependencies mean that DNN training cannot be parallelized effectively. Here, we show that a DNN may be replicated over a massive parallel architecture and used to provide a cumulative sampling of local solution space which results in rapid and robust learning. We introduce a complementary convolutional bootstrapping approach that enhances performance of the parallel architecture further. Our parallelized convolutional bootstrapping DNN out-performs an identical fully-trained traditional DNN after only a single iteration of training.
Parallel networks that learn to pronounce English text This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed human performance. (i) The learning follows a power law. (ii) The more words the network learns, the better it is at generalizing and correctly pronouncing new words. (iii) The performance of the network degrades very slowly as connections in the network are damaged: no single link or processing unit is essential. (iv) Relearning after damage is much faster than learning during the original training. (v) Distributed or spaced practice is more effective for long-term retention than massed practice. Network models can be constructed that have the same performance and learning characteristics on a particular task, but differ completely at the levels of synaptic strengths and single-unit responses. However, hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units. This suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
Well founded semantics for logic programs with explicit negation The aim of this paper is to provide a semantics for general logic programs (with negation by default) extended with explicit negation, subsuming well founded semantics [22]. The Well Founded semantics for extended logic programs (WFSX) is expressible by a default theory semantics we have devised [11]. This relationship improves the cross-fertilization between logic programs and default theories, since we generalize previous results concerning their relationship [3, 4, 7, 1, 2], and there is...
Unbiased estimate of generalization error and model selection in neural network Model selection is based upon the generalization errors of the models under consideration. To estimate the generalization error of a model from the training data, the method of cross-validation and the asymptotic form of the jackknife estimator are used. The average of the predictive errors is used to estimate the generalization error, and this estimate also serves as the model selection criterion. The asymptotic form of this estimate is obtained. An asymptotic model selection criterion is also provided for the case when the error function is the penalized negative log-likelihood. In the regression case, we also prove the asymptotic equivalence of Moody's model selection criterion and the cross-validation method under a condition on the error function.
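Where the abstract describes averaging predictive errors, the procedure is essentially k-fold cross-validation; a hedged sketch follows (model_fit and model_predict are hypothetical callables, and the paper's jackknife variant differs in detail):

```python
import numpy as np

def cv_generalization_error(model_fit, model_predict, X, y, k=5, seed=0):
    """Estimate generalization error by k-fold cross-validation (sketch).

    Fits the model on k-1 folds, measures squared predictive error on
    the held-out fold, and averages; this average is then usable as a
    model selection criterion, as in the abstract.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        params = model_fit(X[train], y[train])
        pred = model_predict(params, X[fold])
        errors.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errors))
```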
Editorial introduction to the Neural Networks special issue on Deep Learning of Representations.
1.200023
0.200023
0.200023
0.200023
0.133355
0.100012
0.050016
0.020021
0.001168
0.000015
0.000004
0
0
0
IMPALA: matching a protein sequence against a collection of PSI-BLAST-constructed position-specific score matrices. Motivation: Many studies have shown that database searches using position-specific score matrices (PSSMs) or profiles as queries are more effective at identifying distant protein relationships than are searches that use simple sequences as queries. One popular program for constructing a PSSM and comparing it with a database of sequences is Position-Specific Iterated BLAST (PSI-BLAST). Results: This paper describes a new software package, IMPALA, designed for the complementary procedure of comparing a single query sequence with a database of PSI-BLAST-generated PSSMs. We illustrate the use of IMPALA to search a database of PSSMs for protein folds, and one for protein domains involved in signal transduction. IMPALA's sensitivity to distant biological relationships is very similar to that of PSI-BLAST. However, IMPALA employs a more refined analysis of statistical significance and, unlike PSI-BLAST, guarantees the output of the optimal local alignment by using the rigorous Smith-Waterman algorithm. Also, it is considerably faster when run with a large database of PSSMs than is BLAST or PSI-BLAST when run against the complete non-redundant protein database.
Blast: At The Core Of A Powerful And Diverse Set Of Sequence Analysis Tools Basic Local Alignment Search Tool (BLAST) is one of the most heavily used sequence analysis tools available in the public domain. There is now a wide choice of BLAST algorithms that can be used to search many different sequence databases via the BLAST web pages (http://www.ncbi.nlm.nih.gov/BLAST/). All the algorithm-database combinations can be executed with default parameters or with customized settings, and the results can be viewed in a variety of ways. A new online resource, the BLAST Program Selection Guide, has been created to assist in the definition of search strategies. This article discusses optimal search strategies and highlights some BLAST features that can make your searches more powerful.
PatternHunter II: highly sensitive and fast homology search. Extending the single optimized spaced seed of PatternHunter to multiple ones, PatternHunter II simultaneously remedies the lack of sensitivity of Blastn and the lack of speed of Smith-Waterman, for homology search. At Blastn speed, PatternHunter II approaches Smith-Waterman sensitivity, bringing homology search technology back to a full circle.
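To make the seed idea concrete, here is a sketch of spaced-seed hit generation in the PatternHunter style ('1' positions must match, '0' positions are wildcards); extending to multiple seeds, as PatternHunter II does, just unions the hit sets. The example seed is PatternHunter's weight-11 seed; the function name is illustrative:

```python
def seed_hits(query, target, seed="111010010100110111"):
    """Return (query_pos, target_pos) pairs where the spaced seed matches."""
    care = [i for i, c in enumerate(seed) if c == "1"]   # sampled positions
    index = {}
    for t in range(len(target) - len(seed) + 1):
        key = "".join(target[t + j] for j in care)
        index.setdefault(key, []).append(t)
    hits = []
    for q in range(len(query) - len(seed) + 1):
        key = "".join(query[q + j] for j in care)
        hits.extend((q, t) for t in index.get(key, []))
    return hits
```

Each hit would then be extended into a full alignment, as in BLAST-style seed-and-extend search.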
A Reconfigurable Parallel Disk System for Filtering Genomic Banks Scanning the genomic database is a common task, daily performed by thousands of researchers for extracting new knowledge from this huge amount of data. Today, this represents tens of Gigabytes. Tomorrow, according to the exponential growth of genomic data, the volume of information to manipulate will reach Terabytes. In this paper we propose a parallel disk system whose originality is to attach reconfigurable computation capabilities near the disk for providing on-the-fly data filtering. Speed-up is provided both by accessing tens of disks in parallel and by hardwiring efficient filters. A 48-disk system is currently assembled for experimentation.
Efficient Data Access for Parallel BLAST Searching biological sequence databases is one of the most routine tasks in computational biology. This task is significantly hampered by the exponential growth in sequence database sizes. Recent advances in parallelization of biological sequence search applications have enabled bioinformatics researchers to utilize high-performance computing platforms and, as a result, greatly reduce the execution time of their sequence database searches. However, existing parallel sequence search tools have been focusing mostly on parallelizing the sequence alignment engine. While the computation-intensive alignment tasks become cheaper with larger machines, data-intensive initial preparation and result merging tasks become more expensive. Inefficient handling of input and output data can easily create performance bottlenecks even on supercomputers. It also causes a considerable data management overhead. In this paper, we present a set of techniques for efficient and flexible data handling in parallel sequence search applications. We demonstrate our optimizations through improving mpiBLAST, an open-source parallel BLAST tool rapidly gaining popularity. These optimization techniques aim at enabling flexible database partitioning, reducing I/O by caching small auxiliary files and results, enabling parallel I/O on shared files, and performing scalable result processing protocols. As a result, we reduce mpiBLAST users' operational overhead by removing the requirement of prepartitioning databases. Meanwhile, our experiments show that these techniques can bring an order-of-magnitude improvement to both the overall performance and scalability of mpiBLAST.
Proceedings of the 24th International Conference on Supercomputing, 2010, Tsukuba, Ibaraki, Japan, June 2-4, 2010
Seed-based genomic sequence comparison using a FPGA/FLASH accelerator. This paper presents a parallel architecture for computing genomic sequence alignments using seed-based algorithms. Originality comes from the simultaneous use of FPGA components and FLASH memories. The FPGA technology brings the computing power while the FLASH memory provides high memory bandwidth able to feed a large array of specific operators. A 64 GBytes FLASH memory connected to a Xilinx Virtex-2 Pro PCI board has been developed, and an array of 160 distance-computation operators has been implemented to perform the first step of seed-based alignment algorithms. Compared to the BLAST reference software family, we measured a speed-up of 75 on a real intensive genomic sequence comparison application.
BMA - Boolean Matrices as Model for Motif Kernels
NCR 3700 - The Next-Generation Industrial Database Computer
A system for adaptive disk rearrangement
A Completeness Result for SLDNF-Resolution Because of the possibility of floundering and infinite derivations, SLDNF-resolution is, in general, not complete. The classical approach [17] to get a completeness result is to restrict the attention to normal programs P and normal goals G, such that P or {G} is allowed and P is hierarchical. Unfortunately, the class of all normal programs and all normal goals meeting these requirements is not powerful enough to be of great practical importance. But after refining the concept of allowedness by taking modes [12] into account, we can broaden the notion of a hierarchical program, and thereby define a subclass of the class of normal programs and normal goals which is powerful enough to compute all primitive recursive functions without losing the completeness of SLDNF-resolution.
Efficient EM Learning with Tabulation for Parameterized Logic Programs We have been developing a general symbolic-statistical modeling language [6,19,20] based on the logic programming framework that semantically unifies (and extends) major symbolic-statistical frameworks such as hidden Markov models (HMMs) [18], probabilistic context-free grammars (PCFGs) [23] and Bayesian networks [16]. The language, PRISM, is intended to model complex symbolic phenomena governed by rules and probabilities based on the distributional semantics [19]. Programs contain statistical parameters and they are automatically learned from randomly sampled data by a specially derived EM algorithm, the graphical EM algorithm. It works on support graphs representing the shared structure of explanations for an observed goal. In this paper, we propose the use of a tabulation technique to build support graphs, and show that as a result, the graphical EM algorithm attains the same time complexity as specialized EM algorithms for HMMs (the Baum-Welch algorithm [18]) and PCFGs (the Inside-Outside algorithm [1]).
Read-Once Unit Resolution Read-once resolution is the resolution calculus with the restriction that any clause can be used at most once in the derivation. We show that the problem of deciding whether a propositional CNF formula has a read-once unit resolution refutation is NP-complete. In contrast, we prove that the problem of deciding whether a formula can be refuted by read-once unit resolution while every proper subformula has no read-once unit resolution refutation is solvable in quadratic time.
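For intuition, a greedy (and deliberately incomplete) search for a read-once unit resolution refutation; the NP-completeness result above is exactly why no efficient complete procedure is expected. The clause encoding and greedy strategy are illustrative assumptions:

```python
def greedy_read_once_unit_refutation(clauses):
    """Try to derive the empty clause, consuming each clause at most once.

    Clauses are frozensets of nonzero ints (negative = negated literal).
    Greedy choices can miss refutations that a different pairing of
    units and partners would find, so False means 'not found', not
    'none exists'.
    """
    pool = [frozenset(c) for c in clauses]
    while True:
        units = [c for c in pool if len(c) == 1]
        if not units:
            return False
        unit = units[0]
        (lit,) = unit
        pool.remove(unit)
        partner = next((c for c in pool if -lit in c), None)
        if partner is None:
            continue                      # unit consumed without a partner
        pool.remove(partner)
        resolvent = frozenset(partner - {-lit})
        if not resolvent:
            return True                   # empty clause derived
        pool.append(resolvent)
```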
Learning Topic Representation For Smt With Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, hence broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representation for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic relevant monolingual data. By associating each translation rule with the topic representation, topic relevant rules are selected according to the distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
1.07812
0.026258
0.017139
0.010424
0.007414
0.001133
0.000442
0.000112
0
0
0
0
0
0
LACIO: A New Collective I/O Strategy for Parallel I/O Systems Parallel applications benefit considerably from the rapid advance of processor architectures and the available massive computational capability, but their performance suffers from large latency of I/O accesses. Poor I/O performance has been identified as a critical cause of the low sustained performance of parallel systems. Collective I/O is widely considered a critical solution that exploits the correlation among I/O accesses from multiple processes of a parallel application and optimizes the I/O performance. However, the conventional collective I/O strategy makes the optimization decision based on the logical file layout to avoid multiple file system calls and does not take the physical data layout into consideration. On the other hand, the physical data layout in fact decides the actual I/O access locality and concurrency. In this study, we propose a new collective I/O strategy that is aware of the underlying physical data layout. We confirm that the new Layout-Aware Collective I/O (LACIO) improves the performance of current parallel I/O systems effectively with the help of noncontiguous file system calls. It holds promise in improving the I/O performance for parallel systems.
Dma-based prefetching for i/o-intensive workloads on the cell architecture Recent advent of the asymmetric multi-core processors such as Cell Broadband Engine (Cell/BE) has popularized the use of heterogeneous architectures. A growing body of research is exploring the use of such architectures, especially in High-End Computing, for supporting scientific applications. However, prior research has focused on use of the available Cell/BE operating systems and runtime environments for supporting compute-intensive jobs. Data and I/O intensive workloads have largely been ignored in this domain. In this paper, we take the first steps in supporting I/O intensive workloads on the Cell/BE and deriving guidelines for optimizing the execution of I/O workloads on heterogeneous architectures. We explore various performance enhancing techniques for such workloads on an actual Cell/BE system. Among the techniques we explore, an asynchronous prefetching-based approach, which uses the PowerPC core of the Cell/BE for file prefetching and decentralized DMAs from the synergistic processing cores (SPE's), improves the performance for I/O workloads that include an encryption/decryption component by 22.2%, compared to I/O performed naïvely from the SPE's. Our evaluation shows promising results and lays the foundation for developing more efficient I/O support libraries for multi-core asymmetric architectures.
Bridging the Gap Between Parallel File Systems and Local File Systems: A Case Study with PVFS Parallel I/O plays an increasingly important role in today's data intensive computing applications. While much attention has been paid to parallel read performance, most of this work has focused on the parallel file system, middleware, or application layers, ignoring the potential for improvement through more effective use of local storage. In this paper, we present the design and implementation of Segment-structured On-disk data Grouping and Prefetching (SOGP), a technique that leverages additional local storage to boost the local data read performance for parallel file systems, especially for those applications with partially overlapped access patterns. Parallel Virtual File System (PVFS) is chosen as an example. Our experiments show that an SOGP-enhanced PVFS prototype system can outperform a traditional Linux-Ext3-based PVFS for many applications and benchmarks, in some tests by as much as 230% in terms of I/O bandwidth.
A Decoupled Execution Paradigm for Data-Intensive High-End Computing High-end computing (HEC) applications in critical areas of science and technology tend to be more and more data intensive. I/O has become a vital performance bottleneck of modern HEC practice. Conventional HEC execution paradigms, however, are computing-centric for computation intensive applications. They are designed to utilize memory and CPU performance and have inherent limitations in addressing the critical I/O bottleneck issues of HEC. In this study, we propose a decoupled execution paradigm (DEP) to address the challenging I/O bottleneck issues. DEP is the first paradigm enabling users to identify and handle data-intensive operations separately. It can significantly reduce costly data movement and is better than the existing execution paradigms for data-intensive applications. The initial experimental tests have confirmed its promising potential. Its data-centric architecture could have an impact in future HEC systems, programming models, and algorithms design and development.
Compiler-based I/O prefetching for out-of-core applications Current operating systems offer poor performance when a numeric application's working set does not fit in main memory. As a result, programmers who wish to solve "out-of-core" problems efficiently are typically faced with the onerous task of rewriting an application to use explicit I/O operations (e.g., read/write). In this paper, we propose and evaluate a fully automatic technique which liberates the programmer from this task, provides high performance, and requires only minimal changes to current operating systems. In our scheme the compiler provides the crucial information on future access patterns without burdening the programmer; the operating system supports nonbinding prefetch and release hints for managing I/O; and the operating system cooperates with a run-time layer to accelerate performance by adapting to dynamic behavior and minimizing prefetch overhead. This approach maintains the abstraction of unlimited virtual memory for the programmer, gives the compiler the flexibility to aggressively insert prefetches ahead of references, and gives the operating system the flexibility to arbitrate between the competing resource demands of multiple applications. We implemented our compiler analysis within the SUIF compiler, and used it to target implementations of our run-time and OS support on both research and commercial systems (Hurricane and IRIX 6.5, respectively). Our experimental results show large performance gains for out-of-core scientific applications on both systems: more than 50% of the I/O stall time has been eliminated in most cases, thus translating into overall speedups of roughly twofold in many cases.
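The nonbinding prefetch and release hints described above map naturally onto kernel advice interfaces; a small illustration using POSIX file advice (the paper's actual mechanism is compiler-inserted hints targeting SUIF/Hurricane/IRIX, not this call):

```python
import os

def hint_will_need(path, offset, length):
    """Issue a nonbinding readahead hint for a file region (Linux/Unix).

    POSIX_FADV_WILLNEED asks the OS to start fetching the pages in the
    background; POSIX_FADV_DONTNEED is the matching 'release' hint.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)
```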
Hystor: making the best use of solid state drives in high performance storage systems With fast technical improvement, flash memory based Solid State Drives (SSDs) are becoming an important part of the computer storage hierarchy to significantly improve performance and energy efficiency. However, due to its relatively high price and low capacity, a major system research issue to address is how to make SSDs play their most effective roles in a high-performance storage system in cost- and performance-effective ways. In this paper, we will answer several related questions with insights based on the design and implementation of a high performance hybrid storage system, called Hystor. We make the best use of SSDs in storage systems by achieving a set of optimization objectives from both system deployment and algorithm design perspectives. Hystor manages both SSDs and hard disk drives (HDDs) as one single block device with minimal changes to existing OS kernels. By monitoring I/O access patterns at runtime, Hystor can effectively identify blocks that (1) can result in long latencies or (2) are semantically critical (e.g. file system metadata), and stores them in SSDs for future accesses to achieve a significant performance improvement. In order to further leverage the exceptionally high performance of writes in the state-of-the-art SSDs, Hystor also serves as a write-back buffer to speed up write requests. Our measurements on Hystor implemented in the Linux kernel 2.6.25.8 show that it can take advantage of the performance merits of SSDs with only a few lines of changes to the stock Linux kernel. Our system study shows that in a highly effective hybrid storage system, SSDs should play a major role as an independent storage where the best suitable data are adaptively and timely migrated in and retained, and it can also be effective to serve as a write-back buffer.
HFS: a performance-oriented flexible file system based on building-block compositions The Hurricane File System (HFS) is designed for (potentially large-scale) shared-memory multiprocessors. Its architecture is based on the principle that, in order to maximize performance for applications with diverse requirements, a file system must support a wide variety of file structures, file system policies, and I/O interfaces. Files in HFS are implemented using simple building blocks composed in potentially complex ways. This approach yields great flexibility, allowing an application to customize the structure and policies of a file to exactly meet its requirements. As an extreme example, HFS allows a file's structure to be optimized for concurrent random-access write-only operations by 10 threads, something no other file system can do. Similarly, the prefetching, locking, and file cache management policies can all be chosen to match an application's access pattern. In contrast, most parallel file systems support a single file structure and a small set of policies. We have implemented HFS as part of the Hurricane operating system running on the Hector shared-memory multiprocessor. We demonstrate that the flexibility of HFS comes with little processing or I/O overhead. We also show that for a number of file access patterns, HFS is able to deliver to the applications the full I/O bandwidth of the disks on our system.
Practical prefetching via data compression An important issue that affects response time performance in current OODB and hypertext systems is the I/O involved in moving objects from slow memory to cache. A promising way to tackle this problem is to use prefetching, in which we predict the user's next page requests and get those pages into cache in the background. Current databases perform limited prefetching using techniques derived from older virtual memory systems. A novel idea of using data compression techniques for prefetching was recently advocated in [KrV, ViK], in which prefetchers based on the Lempel-Ziv data compressor (the UNIX compress command) were shown theoretically to be optimal in the limit. In this paper we analyze the practical aspects of using data compression techniques for prefetching. We adapt three well-known data compressors to get three simple, deterministic, and universal prefetchers. We simulate our prefetchers on sequences of page accesses derived from the OO1 and OO7 benchmarks and from CAD applications, and demonstrate significant reductions in fault-rate. We examine the important issues of cache replacement, size of the data structure used by the prefetcher, and problems arising from bursts of “fast” page requests (that leave virtually no time between adjacent requests for prefetching and book keeping). We conclude that prediction for prefetching based on data compression techniques holds great promise.
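A sketch of prediction by data compression: an LZ78-style phrase trie over the page-access stream whose children at the current context, ranked by frequency, are the prefetch candidates. Structure and names are assumptions; the paper's prefetchers adapt three specific compressors and also handle cache replacement and bursts:

```python
class LZPredictor:
    """LZ78-style next-page predictor for prefetching (sketch)."""

    def __init__(self):
        self.root = {}          # page -> {"count": int, "kids": dict}
        self.node = self.root   # current position in the phrase trie

    def access(self, page):
        if page in self.node:
            rec = self.node[page]
            rec["count"] += 1
            self.node = rec["kids"]        # extend the current phrase
        else:
            self.node[page] = {"count": 1, "kids": {}}  # learn new phrase
            self.node = self.root          # phrase boundary: restart

    def predict(self, n=1):
        """Return up to n pages, most frequent continuations first."""
        ranked = sorted(self.node.items(), key=lambda kv: -kv[1]["count"])
        return [page for page, _ in ranked[:n]]
```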
Solving Advanced Reasoning Tasks Using Quantified Boolean Formulas We consider the compilation of different reasoning tasks into the evaluation problem of quantified boolean formulas (QBFs) as an approach to develop prototype reasoning systems useful for, e.g., experimental purposes. Such a method is a natural generalization of a similar technique applied to NP-problems and has been recently proposed by other researchers. More specifically, we present translations of several well-known reasoning tasks from the area of nonmonotonic reasoning into QBFs, and compare their implementation in the prototype system QUIP with established NMR-provers. The results show reasonable performance, and document that the QBF approach is an attractive tool for rapid prototyping of experimental knowledge-representation systems.
Symbolic Boolean manipulation with ordered binary-decision diagrams Ordered Binary-Decision Diagrams (OBDDs) represent Boolean functions as directed acyclic graphs. They form a canonical representation, making testing of functional properties such as satisfiability and equivalence straightforward. A number of operations on Boolean functions can be implemented as graph algorithms on OBDD data structures. Using OBDDs, a wide variety of problems can be solved through symbolic analysis. First, the possible variations in system parameters and operating conditions are encoded with Boolean variables. Then the system is evaluated for all variations by a sequence of OBDD operations. Researchers have thus solved a number of problems in digital-system design, finite-state system analysis, artificial intelligence, and mathematical logic. This paper describes the OBDD data structure and surveys a number of applications that have been solved by OBDD-based symbolic analysis.
MUP: a minimal unsatisfiability prover After establishing the unsatisfiability of a SAT instance encoding a typical design task, there is a practical need to identify its minimal unsatisfiable subsets, which pinpoint the reasons for the infeasibility of the design. Due to the potentially expensive computation, existing tools for the extraction of unsatisfiable subformulas do not guarantee the minimality of the results. This paper describes a practical algorithm that decides the minimal unsatisfiability of any CNF formula through BDD manipulation. This algorithm has a worst-case complexity that is exponential only in the treewidth of the CNF formula. We provide an empirical evaluation of the algorithm, highlighting its efficiency on a set of hard problems as well as its ability to work with existing subformula extraction tools to achieve optimal results.
Disk failures in the real world: what does an MTTF of 1,000,000 hours mean to you? Component failure in large-scale IT installations is becoming an ever larger problem as the number of components in a single cluster approaches a million. In this paper, we present and analyze field-gathered disk replacement data from a number of large production systems, including high-performance computing sites and internet services sites. About 100,000 disks are covered by this data, some for an entire lifetime of five years. The data include drives with SCSI and FC, as well as SATA interfaces. The mean time to failure (MTTF) of those drives, as specified in their datasheets, ranges from 1,000,000 to 1,500,000 hours, suggesting a nominal annual failure rate of at most 0.88%. We find that in the field, annual disk replacement rates typically exceed 1%, with 2-4% common and up to 13% observed on some systems. This suggests that field replacement is a fairly different process than one might predict based on datasheet MTTF. We also find evidence, based on records of disk replacements in the field, that failure rate is not constant with age, and that, rather than a significant infant mortality effect, we see a significant early onset of wearout degradation. That is, replacement rates in our data grew constantly with age, an effect often assumed not to set in until after a nominal lifetime of 5 years. Interestingly, we observe little difference in replacement rates between SCSI, FC and SATA drives, potentially an indication that disk-independent factors, such as operating conditions, affect replacement rates more than component specific factors. On the other hand, we see only one instance of a customer rejecting an entire population of disks as a bad batch, in this case because of media error rates, and this instance involved SATA disks. Time between replacement, a proxy for time between failure, is not well modeled by an exponential distribution and exhibits significant levels of correlation, including autocorrelation and long-range dependence.
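The datasheet arithmetic behind the abstract's 0.88% figure, under the usual exponential-lifetime assumption (the paper's point is precisely that field behavior deviates from this model):

```python
import math

def nominal_afr(mttf_hours):
    """Annual failure rate implied by a datasheet MTTF: 1 - exp(-8760/MTTF)."""
    return 1.0 - math.exp(-8760.0 / mttf_hours)

# MTTFs of 1.0M-1.5M hours give ~0.87% and ~0.58% nominal AFR,
# versus the 2-4% field replacement rates reported in the paper.
print(f"{nominal_afr(1_000_000):.2%}  {nominal_afr(1_500_000):.2%}")
```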
Destage Algorithms for Disk Arrays with Nonvolatile Caches In a disk array with a nonvolatile write cache, destages from the cache to the disk are performed in the background asynchronously while read requests from the host system are serviced in the foreground. In this paper, we study a number of algorithms for scheduling destages in a RAID-5 system. We introduce a new scheduling algorithm, called linear threshold scheduling, that adaptively varies the rate of destages to disks based on the instantaneous occupancy of the write cache. The performance of the algorithm is compared with that of a number of alternative scheduling approaches, such as least-cost scheduling and high/low mark. The algorithms are evaluated in terms of their effectiveness in making destages transparent to the servicing of read requests from the host, disk utilization, and their ability to tolerate bursts in the workload without causing an overflow of the write cache. Our results show that linear threshold scheduling provides the best read performance of all the algorithms compared, while still maintaining a high degree of burst tolerance. An approximate implementation of the linear-threshold scheduling algorithm is also described. The approximate algorithm can be implemented with much lower overhead, yet its performance is virtually identical to that of the ideal algorithm.
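The linear threshold policy is easy to state as code; the thresholds and rate units below are illustrative, not the paper's parameters:

```python
def destage_rate(occupancy, low=0.2, high=0.8, max_rate=1.0):
    """Linear-threshold destage scheduling (sketch).

    Below `low` cache occupancy the array idles, keeping destages out
    of the way of foreground reads; above `high` it destages at full
    speed to absorb bursts; in between, the rate rises linearly with
    the instantaneous occupancy of the nonvolatile write cache.
    """
    if occupancy <= low:
        return 0.0
    if occupancy >= high:
        return max_rate
    return max_rate * (occupancy - low) / (high - low)
```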
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.05
0.033333
0.033333
0.033333
0.009667
0.003333
0.000061
0.00001
0
0
0
0
0
0
Polynomial terse sets Let A be a set and k ∈ N be such that we wish to know the answers to x_1 ∈ A?, x_2 ∈ A?, …, x_k ∈ A? for various k-tuples 〈x_1, x_2, …, x_k〉. If this problem requires k queries to A in order to be solved in polynomial time, then A is called polynomial terse or pterse. We show the existence of both arbitrarily complex pterse and non-pterse sets; and that P ≠ NP iff every NP-complete set is pterse. We also show connections with p-immunity, p-selective sets, p-generic sets, and the boolean hierarchy. In our framework, unique satisfiability (and a variation of it called kSAT) is, in some sense, "close" to satisfiability.
On the query complexity of clique size and maximum satisfiability This paper explores the bounded query complexity of approximating the size of the maximum clique in a graph (Clique Size) and the number of simultaneously satisfiable clauses in a 3CNF formula (MaxSat). The results in the paper show that for certain approximation factors, approximating Clique Size and MaxSat are complete for corresponding bounded query classes under metric reductions. The completeness result is important because it shows that queries and approximation are interchangeable: NP queries can be used to solve NP-approximation problems and solutions to NP-approximation problems answer queries to NP oracles. Completeness also shows the existence of approximation preserving reductions from many NP-approximation problems to approximating Clique Size and MaxSat (e.g., from approximating Chromatic Number to approximating Clique Size). Since query complexity is a quantitative complexity measure, these results also provide a framework for comparing the complexities of approximating Clique Size and approximating MaxSat. In addition, this paper examines the query complexity of the minimization version of the satisfiability problem, MinUnsat, and shows that the complexity of approximating MinUnsat is very similar to the complexity of approximating Clique Size. Since MaxSat and MinUnsat share the same solution space, the "approximability" of MaxSat is not due to the intrinsic complexity of satisfiability, but is an artifact of viewing the approximation version of satisfiability as a maximization problem.
On the structure of bounded queries to arbitrary NP sets In [Kad87b], Kadin showed that if the Polynomial Hierarchy (PH) has infinitely many levels, then for all $k$, $P^{SAT[k]} \subsetneq P^{SAT[k+1]}$. In this paper, we extend Kadin's technique to show that a proper query hierarchy is not an exclusive property of SAT. In fact, for any $A \in NP - low_{3}$, if PH is infinite, then $P^{A[k]} \subsetneq P^{A[k+1]}$. Moreover, for the case of parallel queries, we show that $P^{A\|[k+1]}$ is not contained in $P^{SAT\|[k]}$. We claim that having a proper query hierarchy is a consequence of the oracle access mechanism and not a result of the "hardness" of a set. To support this claim, we show that, assuming PH is infinite, one can construct an intermediate set $B \in NP$ such that $P^{B[k+1]} \not\subseteq P^{SAT[k]}$. That is, the query hierarchy for $B$ grows as "tall" as the query hierarchy for SAT. In addition, $B$ is intermediate, so it is not "hard" in any sense (e.g., not NP-hard under many-one, Turing, or strong nondeterministic reductions). Using these same techniques, we explore some other questions about query hierarchies. For example, we show that if there exists any $A$ such that $P^{A[2]} = P^{SAT[1]}$, then PH collapses to $\Delta^{P}_{3}$.
Lower bounds for constant depth circuits in the presence of help bits The problem of how many extra bits of 'help' a constant depth circuit needs in order to compute m functions is considered. Each help bit can be an arbitrary Boolean function. An exponential lower bound on the size of the circuit computing m parity functions in the presence of m-1 help bits is proved. The proof is carried out using the algebraic machinery of A. Razborov (1987) and R. Smolensky (1987). A by-product of the proof is that the same bound holds for circuits with mod p gates for a fixed prime p > 2. The lower bound implies a random oracle separation for PH and PSPACE, which is optimal in a technical sense.
Bounded queries to SAT and the Boolean hierarchy We study the complexity of decision problems that can be solved by a polynomial-time Turing machine that makes a bounded number of queries to an NP oracle. Depending on whether we allow some queries to depend on the results of other queries, we obtain two (probably) different hierarchies. We present several results relating the bounded NP query hierarchies to each other and to the Boolean hierarchy. We also consider the similarly defined hierarchies of functions that can be computed by a polynomial-time Turing machine that makes a bounded number of queries to an NP oracle. We present relations among these two hierarchies and the Boolean hierarchy. In particular, we show for all k that there are functions computable with 2k parallel queries to an NP set that are not computable in polynomial time with k serial queries to any oracle, unless P = NP. As a corollary, k + 1 parallel queries to an NP set allow us to compute more functions than are computable with only k parallel queries to an NP set, unless P = NP; the same is true of serial queries. Similar results hold for all tt-self-reducible sets. Using a "mind-change" technique, we show that 2^k - 1 parallel queries to an NP set allow us to accept in polynomial time exactly the same sets as can be accepted in polynomial time with k serial queries to an NP set. (In fact, the same is true for any class in place of NP that is closed under polynomial-time positive-bounded-truth-table reductions.) This contrasts with the expected result for function computations with an NP oracle (Beigel, 1988). In addition, we show that the Boolean hierarchy and the bounded query hierarchies (of languages) either stand or collapse together. Finally, we show that if the Boolean hierarchy collapses to any level but the zeroth (deterministic polynomial time), then for all k there are functions computable in polynomial time with k parallel queries to an NP set that are not computable in polynomial time with k - 1 serial queries to any set (NP-complete sets are p-superterse).
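A quick way to see the serial/parallel exchange rate stated above: k adaptive queries trace a path in a depth-k binary answer tree, and every query that could possibly be asked is one of its internal nodes, so issuing all of them at once needs

$$1 + 2 + \cdots + 2^{k-1} = 2^k - 1$$

parallel queries. That gives the easy containment; the converse, recovering k serial queries from 2^k - 1 parallel ones, is where the paper's mind-change argument does the work.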
On the Power of Deterministic Reductions to C=P The counting class C=P, which captures the notion of "exact counting," while extremely powerful under various nondeterministic reductions, is quite weak under polynomial-time deterministic reductions. We discuss the analogies between NP and co-C=P, which allow us to derive many interesting results for such deterministic reductions to co-C=P. We exploit these results to obtain some interesting oracle separations. Most importantly, we show that there exists an oracle A such that ⊕P^A ⊄ P^(C=P^A) and BPP^A ⊄ P^(C=P^A). Therefore, techniques that would prove that C=P and PP are polynomial-time Turing equivalent, or that C=P is polynomial-time Turing hard for the polynomial-time hierarchy, would not relativize.
The Logarithmic Alternation Hierarchy Collapses: $A\Sigma_2^{\mathcal{L}} = A\Pi_2^{\mathcal{L}}$
Recognizing when greed can approximate maximum independent sets is complete for parallel access to NP Bodlaender, Thilikos, and Yamazaki (1997) investigate the computational complexity of the problem of whether the Minimum Degree Greedy Algorithm can approximate a maximum independent set of a graph within a constant factor of r, for fixed rational r ≥ 1. They denote this problem by S_r and prove that for each rational r ≥ 1, S_r is coNP-hard. They also provide a P^NP upper bound for S_r, leaving open the question of whether this gap between the upper and the lower bound of S_r can be closed. For the special case of r = 1, they show that S_1 is even DP-hard, again leaving open the question of whether S_1 can be shown to be complete for DP or some larger class such as P^NP. In this note, we completely solve all the questions left open by Bodlaender et al. Our main result is that for each rational r ≥ 1, S_r is complete for P_||^NP, the class of sets solvable via parallel access to NP.
Some Results on the Complexity of Planning with Incomplete Information Planning with incomplete information may mean a number of different things: that certain facts of the initial state are not known, that operators can have random or nondeterministic effects, or that the plans created contain sensing operations and are branching. Study of the complexity of incomplete information planning has so far been concentrated on probabilistic domains, where a number of results have been found. We examine the complexity of planning in nondeterministic propositional...
Parallel non-binary planning in polynomial time This paper formally presents a class of planning problems which allows non-binary state variables and parallel execution of actions. The class is proven to be tractable, and we provide a sound and complete polynomial time algorithm for planning within this class. This result means that we are getting closer to tackling realistic planning problems in sequential control, where a restricted problem representation is often sufficient, but where the size of the problems makes tractability an important issue.
Formalizing narratives using nested circumscription Representing and reasoning about narratives together with the ability to do hypothetical reasoning is important for agents in a dynamic world. These agents need to record their observations and action executions as a narrative and, at the same time, to achieve their goals against a changing environment, they need to make plans (or re-plan) from the current situation. The early action formalisms did one or the other. For example, while the original situation calculus was meant for hypothetical reasoning and planning, the event calculus was more appropriate for narratives. Recently, there have been some attempts at developing formalisms that do both. Independently, there has also been a lot of recent research in reasoning about actions using circumscription. Of particular interest to us is the research on using high-level languages and their logical representation using nested abnormality theories (NATs), a form of circumscription with blocks that make knowledge representation modular. Starting from theories in the high-level language L, which is extended to allow concurrent actions, we define a translation to NATs that preserves both narrative and hypothetical reasoning. We initially use the high-level language L, and then extend it to allow concurrent actions. In the process, we study several knowledge representation issues such as filtering, and restricted monotonicity with respect to NATs. Finally, we compare our formalization with other approaches, and discuss how our use of NATs makes it easier to incorporate other features of action theories, such as constraints, into our formalization.
Network file storage with graceful performance degradation A file storage scheme is proposed for networks containing heterogeneous clients. In the scheme, the performance measured by file-retrieval delays degrades gracefully under increasingly serious faulty circumstances. The scheme combines coding with storage for better performance. The problem is NP-hard for general networks; and this article focuses on tree networks with asymmetric edges between adjacent nodes. A polynomial-time memory-allocation algorithm is presented, which determines how much data to store on each node, with the objective of minimizing the total amount of data stored in the network. Then a polynomial-time data-interleaving algorithm is used to determine which data to store on each node for satisfying the quality-of-service requirements in the scheme. By combining the memory-allocation algorithm with the data-interleaving algorithm, an optimal solution to realize the file storage scheme in tree networks is established.
Flexibility and performance of parallel file systems As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions make applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1.026147
0.02653
0.026436
0.018593
0.00851
0.002893
0.000463
0.000115
0.000005
0
0
0
0
0
Evaluation of GPU-based Seed Generation for Computational Genomics Using Burrows-Wheeler Transform Unprecedented production of short reads from the new high-throughput sequencers has posed challenges in aligning short reads to reference genomes with high sensitivity and high speed. Many CPU-based short read aligners have been developed to address this challenge. Among them, one popular approach is the seed-and-extend heuristic. For this heuristic, the first and foremost step is to generate seeds between the input reads and the reference genome, where hash tables are the most frequently used data structure. However, hash tables are memory-consuming, making them ill-suited to memory-stringent many-core architectures, like GPUs, even though they usually have a nearly constant query time complexity. The Burrows-Wheeler transform (BWT) provides a memory-efficient alternative, which has the drawback of query time complexity that grows with query length. In this paper, we investigate GPU-based fixed-length seed generation for computational genomics based on the BWT and Ferragina-Manzini (FM) index, where k-mers from the reads are searched against a reference genome (indexed using BWT) to find k-mer matches (i.e. seeds). In addition to exact matches, mismatches are allowed at any position within a seed, different from spaced seeds that only allow mismatches at predefined positions. By evaluating the relative performance of our GPU version against an equivalent CPU version, we intend to provide some useful guidance for the development of GPU-based seed generators for aligners based on the seed-and-extend paradigm.
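A self-contained sketch of the BWT/FM-index exact-match search that underlies seed generation of this kind (naive quadratic index construction for clarity; real indexes sample occ and count). Allowing a mismatch at any seed position, as in the paper, amounts to branching the search loop over substitute characters:

```python
def bwt(text):
    """Naive Burrows-Wheeler transform with a '$' terminator."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def fm_index(bwt_string):
    """count[c]: #chars < c in the text; occ(c, i): #c in BWT[:i]."""
    count = {c: sum(x < c for x in bwt_string) for c in set(bwt_string)}
    occ = lambda c, i: bwt_string[:i].count(c)   # O(i); sampled in practice
    return count, occ

def backward_search(pattern, count, occ, n):
    """Suffix-array interval [lo, hi) of exact matches, or None."""
    lo, hi = 0, n
    for c in reversed(pattern):
        if c not in count:
            return None
        lo = count[c] + occ(c, lo)
        hi = count[c] + occ(c, hi)
        if lo >= hi:
            return None
    return lo, hi

B = bwt("ACAACG")
count, occ = fm_index(B)
print(backward_search("ACA", count, occ, len(B)))   # one match: hi - lo == 1
```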
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantices for logic programs with negation.Its formulation is quite simple;at the same time, it is more general than the iterated fixed point semantics for stratified programs,and is applicable to some useful programs that are not stratified.
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available.Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated to the propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16×16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
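A compact sketch of the kernel eigenvalue computation the abstract describes: center the Gram matrix in feature space, solve the eigenproblem, normalize, and project. The polynomial kernel in the usage line is the kind of pixel-product feature map mentioned above; exact normalization conventions vary:

```python
import numpy as np

def kernel_pca(X, kernel, n_components=2):
    """Kernel PCA via the centered Gram-matrix eigenproblem (sketch)."""
    n = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one      # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    top = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, top] / np.sqrt(np.maximum(vals[top], 1e-12))
    return Kc @ alphas                              # projected training data

X = np.random.default_rng(0).normal(size=(50, 4))
Z = kernel_pca(X, kernel=lambda a, b: (a @ b + 1.0) ** 2)   # degree-2 poly
```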
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
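The factorization step at the heart of the smoothing approach, in a dense toy form (real square-root SAM exploits sparsity and column reordering such as COLAMD, which this sketch omits):

```python
import numpy as np

def sam_step(A, b):
    """Solve the linearized SLAM least-squares problem via QR (sketch).

    A is the measurement Jacobian over the whole trajectory and map,
    b the residual vector. Factorizing A = QR and back-substituting
    R x = Q^T b solves min ||A x - b||^2 without ever forming the
    information matrix A^T A.
    """
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)   # update for all poses/landmarks
```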
An A Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog and the systems for computing answer sets of such programs can be successfully applied to the development of medium-size knowledge-intensive applications. We report on a successful design and development of such a system controlling some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Though many papers have been devoted to studying incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, detection rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are significantly improved in all three representative cases. This approach also provides an alternative way to reduce the amount of labeled data, which is expensive to obtain, required in training better incident classifiers, since the feature learning is unsupervised.
Protecting RAID Arrays against Unexpectedly High Disk Failure Rates Disk failure rates vary so widely among different makes and models that designing storage solutions for the worst case scenario is a losing proposition. The approach we propose here is to design our storage solutions for the most probable case while incorporating in our design the option of adding extra redundancy when we find out that its disks are less reliable than expected. To illustrate our proposal, we show how to increase the reliability of existing two-dimensional disk arrays with n^2 data elements and 2n parity elements by adding n additional parity elements that will mirror the contents of half the existing parity elements. Our approach offers the three advantages of being easy to deploy, not affecting the complexity of parity calculations, and providing a five-year reliability of 99.999 percent in the face of catastrophic levels of data loss where the array would lose up to a quarter of its storage capacity in a year.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Data Migration Scheme Considering Node Reliability for an Autonomous Distributed Storage System We have proposed a distributed storage system which aims to utilize storage fragments distributed in network nodes without administrators' maintenance cost. Even though the performance of storage devices is not uniform, autonomous data block migration helps to enhance the whole I/O performance of the storage system. However, we have so far considered all the storage devices to be equally reliable. On this assumption, migration of an important data block to unstable storage might degrade the availability of that data block. In this paper, we propose a scheme to choose the destination of data block migration by evaluating the nodes' reliability along with the performance of storage devices. This scheme aims to improve system stability by preventing critical data from being located on an unstable storage device. Storage nodes which lack stability are still utilized for storing data blocks which can be easily regenerated or duplicated. The results of the preliminary investigation show that a large number of warning logs from the Machine Check Architecture may suggest a sign of unexpected system shutdown.
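One plausible way to code the destination choice sketched in the abstract: score each candidate node by a criticality-weighted mix of reliability and performance. The scoring formula and fields are illustrative assumptions, not the paper's:

```python
def choose_destination(nodes, criticality):
    """Pick a migration target for a block with given criticality in [0, 1].

    Critical blocks weight node reliability (e.g. derived from Machine
    Check Architecture warning counts) heavily; easily regenerated
    blocks mostly chase raw I/O performance.
    """
    def score(node):
        return (criticality * node["reliability"]
                + (1.0 - criticality) * node["performance"])
    return max(nodes, key=score)

nodes = [{"id": "stable-slow", "reliability": 0.99, "performance": 0.4},
         {"id": "flaky-fast",  "reliability": 0.70, "performance": 0.9}]
print(choose_destination(nodes, 0.9)["id"])   # critical block -> "stable-slow"
```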
A logic for default reasoning The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
Extended stable semantics for normal and disjunctive programs
The Stable Model Semantics for Logic Programming We propose a new declarative semantics for logic programs with negation. Its formulation is quite simple; at the same time, it is more general than the iterated fixed point semantics for stratified programs, and is applicable to some useful programs that are not stratified.
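As a worked illustration of the semantics for ground normal programs, the sketch below checks stability via the Gelfond-Lifschitz reduct: reduce the program with respect to a candidate atom set, then test whether the candidate is the least model of the resulting negation-free program. The rule encoding is our own.

```python
def reduct(program, candidate):
    """Gelfond-Lifschitz reduct of a ground normal program.

    A rule is (head, positive_body, negative_body); the reduct keeps a rule,
    minus its negative body, iff no negated atom lies in the candidate set."""
    return [(h, pos) for (h, pos, neg) in program
            if not (set(neg) & candidate)]

def least_model(positive_program):
    """Least model of a negation-free program, by naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    return least_model(reduct(program, candidate)) == candidate

# p :- not q.   q :- not p.   -- the classic program with two stable models
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_stable(prog, {"p"}), is_stable(prog, {"q"}), is_stable(prog, {"p", "q"}))
# -> True True False
```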
Classical Negation in Logic Programs and Disjunctive Databases An important limitation of traditional logic programming as a knowledge representation tool, in comparison with classical logic, is that logic programming does not allow us to deal directly with incomplete information. In order to overcome this limitation, we extend the class of general logic programs by including classical negation, in addition to negation-as-failure. The semantics of such extended programs is based on the method of stable models. The concept of a disjunctive database can be extended in a similar way. We show that some facts of commonsense knowledge can be represented by logic programs and disjunctive databases more easily when classical negation is available. Computationally, classical negation can be eliminated from extended programs by a simple preprocessor. Extended programs are identical to a special case of default theories in the sense of Reiter.
Improvements to the Evaluation of Quantified Boolean Formulae We present a theorem-prover for quantified Boolean formulae and evaluate it on random quantified formulae and formulae that represent problems from automated planning. Even though the notion of quantified Boolean formula is theoretically important, automated reasoning with QBF has not been thoroughly investigated. Universal quantifiers are needed in representing many computational problems that cannot be easily translated into propositional logic and solved by satisfiability algorithms. Therefore efficient reasoning with QBF is important. The Davis-Putnam procedure can be extended to evaluate quantified Boolean formulae. A straightforward algorithm of this kind is not very efficient. We identify universal quantifiers as the main area where improvements to the basic algorithm can be made. We present a number of techniques for reducing the amount of search that is needed, and evaluate their effectiveness by running the algorithm on a collection of formulae obtained from planning and generated randomly. For the structured problems we consider, the techniques lead to a dramatic speed-up.
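For reference, a naive recursive evaluator for prenex-CNF QBF shows the baseline search that such improvements target: it must branch on both values of a universal variable but only needs one successful value of an existential one. The encoding (DIMACS-style signed literals) is an assumption, and this is not the paper's optimized prover.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Naive recursive evaluation of a QBF in prenex CNF.

    prefix: list of ('forall' | 'exists', var); clauses: list of lists of
    signed ints, where -v denotes the negation of variable v."""
    assignment = assignment or {}
    if not prefix:
        # All variables fixed: every clause must contain a true literal.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in c)
                   for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, v: val})
                for val in (False, True))
    return all(branches) if q == 'forall' else any(branches)

# forall x exists y . (x or y) and (not x or not y)  -- satisfied by y = not x
print(eval_qbf([('forall', 1), ('exists', 2)], [[1, 2], [-1, -2]]))  # True
```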
Multi-level transaction management for complex objects: implementation, performance, parallelism Multi-level transactions are a variant of open-nested transactions in which the subtransactions correspond to operations at different levels of a layered system architecture. They allow the exploitation of semantics of high-level operations to increase concurrency. As a consequence, undoing a transaction requires compensation of completed subtransactions. In addition, multi-level recovery methods must take into consideration that high-level operations are not necessarily atomic if multiple pages are updated in a single subtransaction. This article presents algorithms for multi-level transaction management that are implemented in the database kernel system (DASDBS). In particular, we show that multi-level recovery can be implemented in an efficient way. We discuss performance measurements using a synthetic benchmark for processing complex objects in a multi-user environment. We show that multi-level transaction management can be extended easily to cope with parallel subtransactions within a single transaction. Performance results are presented with varying degrees of inter- and intratransaction parallelism.
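The compensation idea at the heart of open nesting can be shown in miniature: each completed subtransaction registers a compensating action, and aborting the parent runs those compensations in reverse order instead of a page-level rollback. This is an illustrative skeleton only; DASDBS's actual algorithms also handle logging, concurrency, and non-atomic multi-page subtransactions.

```python
def run_multilevel(subtransactions):
    """subtransactions: iterable of (do, undo) callables, where each 'do'
    commits a high-level operation and 'undo' is its semantic compensation."""
    compensations = []
    try:
        for do, undo in subtransactions:
            do()                          # subtransaction commits early...
            compensations.append(undo)    # ...so record how to compensate it
    except Exception:
        for undo in reversed(compensations):
            undo()                        # undo by compensation, not rollback
        raise
```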
Nonlinear component analysis as a kernel eigenvalue problem A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16 x 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
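A compact rendering of the computation: form the Gram matrix of a kernel, double-center it (centering in feature space), and eigendecompose. An RBF kernel is used here for brevity, whereas the pixel-product example above corresponds to a polynomial kernel.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=0.1):
    """Kernel PCA: eigendecompose the centered Gram matrix of an RBF kernel.

    Returns the projections of the training points onto the leading
    nonlinear principal components."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double centering in feature space
    w, V = np.linalg.eigh(Kc)                    # eigenvalues in ascending order
    w, V = w[::-1][:n_components], V[:, ::-1][:, :n_components]
    alphas = V / np.sqrt(np.maximum(w, 1e-12))   # unit-norm feature-space components
    return Kc @ alphas
```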
Performance of a mirrored disk in a real-time transaction system Disk mirroring has found widespread use in computer systems as a method for providing fault tolerance. In addition to increasing reliability, a mirrored disk can also reduce I/O response time by supporting the execution of parallel I/O requests. The improvement in I/O efficiency is extremely important in a real-time system, where each computational entity carries a deadline. In this paper, we present two classes of real-time disk scheduling policies, RT-DMQ and RT-CMQ, for a mirrored disk I/O subsystem and examine their performance in an integrated real-time transaction system. The real-time transaction system model is validated on a real-time database testbed, called RT-CARAT. The performance results show that a mirrored disk I/O subsystem can decrease the fraction of transactions that miss their deadlines over a single disk system by 68%. Our results also reveal the importance of real-time scheduling policies, which can lead up to a 17% performance improvement over non-real-time policies in terms of minimizing the transaction loss ratio.
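One source of the I/O speedup is easy to picture: a read can be dispatched to whichever mirror's arm is nearer the target cylinder. The toy dispatcher below shows just that choice; the deadline-aware ordering that distinguishes RT-DMQ and RT-CMQ is not detailed in the abstract and is omitted here.

```python
def pick_mirror(cylinder, head_positions):
    """Index of the mirror copy whose head is closest to the requested cylinder."""
    return min(range(len(head_positions)),
               key=lambda d: abs(head_positions[d] - cylinder))

print(pick_mirror(300, [120, 280]))  # -> 1: the second disk's head is closer
```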
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been explored that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; they are better equipped to deal with non-linear process and measurement models; and they yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.
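The linear-algebra core of a square-root approach can be sketched in a few lines, assuming the problem has already been linearized into a measurement Jacobian A and residual b: factor the information matrix into a triangular square root R and back-substitute. Production systems work on sparse matrices with column-reordering heuristics (e.g. COLAMD) rather than this dense stand-in.

```python
import numpy as np

def smoothing_update(A, b):
    """Solve the linearized SLAM least-squares min ||A x - b||_2 via the
    square-root information matrix R, where R^T R = A^T A.

    Assumes A has full column rank so A^T A is positive definite."""
    R = np.linalg.cholesky(A.T @ A).T     # upper-triangular square root
    y = np.linalg.solve(R.T, A.T @ b)     # forward substitution
    return np.linalg.solve(R, y)          # back substitution: full trajectory update
```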
An A-Prolog decision support system for the Space Shuttle The goal of this paper is to test whether a programming methodology based on the declarative language A-Prolog, and on the systems for computing answer sets of such programs, can be successfully applied to the development of medium-size knowledge-intensive applications. We report on the successful design and development of such a system, which controls some of the functions of the Space Shuttle.
Domain adaptation for object recognition: An unsupervised approach Adapting a classifier trained on a source domain to recognize instances from a new target domain is an important problem that has recently been receiving attention. In this paper, we present one of the first studies of unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore no correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of the same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source-domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation and for cases with multiple source and target domains, and report competitive results on standard datasets.
An Unsupervised Feature Learning Approach to Improve Automatic Incident Detection. Sophisticated automatic incident detection (AID) technology plays a key role in contemporary transportation systems. Although many papers have been devoted to incident classification algorithms, few studies have investigated how to enhance the feature representation of incidents to improve AID performance. In this paper, we propose to use an unsupervised feature learning algorithm to generate higher-level features to represent incidents. We used real incident data in the experiments and found that an effective feature mapping function can be learnt from data across the test sites. With the enhanced features, the detection rate (DR), false alarm rate (FAR), and mean time to detect (MTTD) are significantly improved in all three representative cases. Because the feature learning is unsupervised, this approach also offers a way to reduce the amount of labeled data, which is expensive to obtain, required to train better incident classifiers.
Learning Topic Representation for SMT with Neural Networks Statistical Machine Translation (SMT) usually utilizes contextual information to disambiguate translation candidates. However, it is often limited to contexts within sentence boundaries, so broader topical information cannot be leveraged. In this paper, we propose a novel approach to learning topic representations for parallel data using a neural network architecture, where abundant topical contexts are embedded via topic-relevant monolingual data. By associating each translation rule with the topic representation, topic-relevant rules are selected according to their distributional similarity with the source text during SMT decoding. Experimental results show that our method significantly improves translation accuracy on the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
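In decoding terms, the selection step reduces to scoring each candidate rule by the similarity of its topic vector to that of the source text. The sketch below uses cosine similarity as a stand-in for the paper's distributional similarity, and the data layout is assumed.

```python
import numpy as np

def rank_rules_by_topic(source_topic, rules):
    """Order candidate translation rules by topical affinity with the source text.

    rules: list of dicts, each with a 'topic' vector learned by the neural model."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return sorted(rules, key=lambda r: cos(source_topic, r["topic"]), reverse=True)
```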
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0